March 29, 2026 · 16 min read

Build an SEO Audit Dashboard in Elixir with Phoenix LiveView and the SEOPeek API

Elixir and Phoenix LiveView are a natural fit for real-time dashboards. The BEAM VM handles thousands of concurrent processes with barely any memory overhead, LiveView pushes updates to the browser over WebSockets without a line of JavaScript, and OTP supervision trees keep your background audit workers running indefinitely. This guide walks you through building a production-grade SEO audit dashboard that audits URLs in real time, streams results to connected clients, batch-processes large URL lists with rate limiting, stores audit history in PostgreSQL, and schedules periodic site-wide scans with Quantum.

In this guide
  1. Setting up a Phoenix LiveView project
  2. Creating an SEOPeek API client module
  3. LiveView component for real-time URL auditing
  4. Streaming results with Phoenix PubSub
  5. GenServer for batch auditing with rate limiting
  6. Storing audit history in Ecto and PostgreSQL
  7. Scheduling periodic audits with Quantum
  8. Pricing and rate limits
  9. FAQ

1. Setting Up a Phoenix LiveView Project

Start by generating a new Phoenix project with LiveView enabled. Phoenix 1.7+ includes LiveView by default, so no extra flags are needed.

mix phx.new seopeek_dashboard --database postgres
cd seopeek_dashboard

Add the dependencies we need to mix.exs. We use Finch for HTTP requests (it ships with Phoenix but we need to configure a named pool), Jason for JSON parsing (already included), and Quantum for cron-style scheduling.

defp deps do
  [
    {:phoenix, "~> 1.7"},
    {:phoenix_html, "~> 4.1"},
    {:phoenix_live_view, "~> 1.0"},
    {:phoenix_live_dashboard, "~> 0.8"},
    {:ecto_sql, "~> 3.12"},
    {:postgrex, ">= 0.0.0"},
    {:finch, "~> 0.19"},
    {:jason, "~> 1.4"},
    {:quantum, "~> 3.5"},
    {:bandit, "~> 1.5"}
  ]
end

Fetch dependencies and create the database:

mix deps.get
mix ecto.create

Next, configure a named Finch pool in lib/seopeek_dashboard/application.ex. Add it to the supervision tree so it starts automatically and is available to every process in the application.

def start(_type, _args) do
  children = [
    SeopeekDashboardWeb.Telemetry,
    SeopeekDashboard.Repo,
    {DNSCluster, query: Application.get_env(:seopeek_dashboard, :dns_cluster_query) || :ignore},
    {Phoenix.PubSub, name: SeopeekDashboard.PubSub},
    {Finch, name: SeopeekDashboard.Finch},
    SeopeekDashboard.AuditWorker,
    SeopeekDashboard.Scheduler,
    SeopeekDashboardWeb.Endpoint
  ]

  opts = [strategy: :one_for_one, name: SeopeekDashboard.Supervisor]
  Supervisor.start_link(children, opts)
end

Notice that we also added SeopeekDashboard.AuditWorker and SeopeekDashboard.Scheduler to the supervision tree. We will build those modules in later sections. The key point is that OTP supervision means these processes restart automatically if they crash—your audit pipeline is self-healing.

2. Creating an SEOPeek API Client Module

The SEOPeek API is a single GET endpoint. You pass the target URL as a query parameter, and it returns a JSON object containing a numeric score, a letter grade, and detailed pass/fail results for 20+ on-page SEO checks. No API key is needed for the free tier.

The endpoint:

GET https://us-central1-todd-agent-prod.cloudfunctions.net/seopeekApi/api/v1/audit?url=TARGET_URL

Create the client module at lib/seopeek_dashboard/seopeek_client.ex:

defmodule SeopeekDashboard.SeopeekClient do
  @moduledoc """
  HTTP client for the SEOPeek audit API.
  Uses Finch for connection pooling and efficient HTTP/2 requests.
  """

  @base_url "https://us-central1-todd-agent-prod.cloudfunctions.net/seopeekApi/api/v1/audit"

  @doc """
  Audits a single URL and returns the parsed result.
  Returns {:ok, map()} on success, {:error, reason} on failure.
  """
  def audit_url(url, opts \\ []) do
    api_key = Keyword.get(opts, :api_key)
    query = URI.encode_query(%{"url" => url})
    full_url = "#{@base_url}?#{query}"

    headers = build_headers(api_key)

    request = Finch.build(:get, full_url, headers)

    case Finch.request(request, SeopeekDashboard.Finch, receive_timeout: 30_000) do
      {:ok, %Finch.Response{status: 200, body: body}} ->
        {:ok, Jason.decode!(body)}

      {:ok, %Finch.Response{status: status, body: body}} ->
        {:error, "HTTP #{status}: #{body}"}

      {:error, reason} ->
        {:error, "Request failed: #{inspect(reason)}"}
    end
  end

  defp build_headers(nil), do: [{"accept", "application/json"}]
  defp build_headers(api_key) do
    [{"accept", "application/json"}, {"x-api-key", api_key}]
  end
end

A few things to notice. The Finch pool is referenced by its registered name (SeopeekDashboard.Finch) rather than creating a new connection on every call. Finch maintains a connection pool internally (and can multiplex requests over HTTP/2 when the pool is configured for it), so concurrent audit requests reuse connections efficiently. The receive_timeout of 30 seconds prevents a single slow audit from blocking the caller indefinitely.

The api_key option is nil by default, which works for the free tier (50 audits/day). When you upgrade to the Starter or Pro plan, pass the key and it gets included as a request header automatically.
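If you would rather not pass the key at every call site, one option is to read it from application config at runtime. This is a sketch for this guide's setup, not something the client requires; the :seopeek_api_key config key is our own invention:

```elixir
# config/runtime.exs -- read the key from the environment at boot
import Config

config :seopeek_dashboard, :seopeek_api_key, System.get_env("SEOPEEK_API_KEY")
```

Then call SeopeekClient.audit_url(url, api_key: Application.get_env(:seopeek_dashboard, :seopeek_api_key)). When the environment variable is unset, get_env returns nil and the client falls back to free-tier behavior.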

Tip: For production, configure the Finch pool size in your application config: {Finch, name: SeopeekDashboard.Finch, pools: %{default: [size: 25, count: 1]}}. This allows up to 25 concurrent connections to the SEOPeek API.

3. LiveView Component for Real-Time URL Auditing

This is where Elixir shines. With Phoenix LiveView, you can build a fully interactive SEO audit form that submits a URL, kicks off an async audit, and renders the result—all without writing any JavaScript. The server pushes HTML diffs to the browser over a persistent WebSocket connection.

Create the LiveView at lib/seopeek_dashboard_web/live/audit_live.ex:

defmodule SeopeekDashboardWeb.AuditLive do
  use SeopeekDashboardWeb, :live_view

  alias SeopeekDashboard.SeopeekClient
  alias SeopeekDashboard.Audits

  @impl true
  def mount(_params, _session, socket) do
    if connected?(socket) do
      Phoenix.PubSub.subscribe(SeopeekDashboard.PubSub, "audit:results")
    end

    {:ok,
     assign(socket,
       url: "",
       loading: false,
       result: nil,
       error: nil,
       recent_audits: Audits.list_recent(10)
     )}
  end

  @impl true
  def handle_event("audit", %{"url" => url}, socket) do
    url = String.trim(url)

    if url == "" do
      {:noreply, assign(socket, error: "Please enter a URL")}
    else
      # Spawn an async task so the LiveView stays responsive
      pid = self()

      Task.start(fn ->
        case SeopeekClient.audit_url(url) do
          {:ok, result} ->
            send(pid, {:audit_complete, url, result})

          {:error, reason} ->
            send(pid, {:audit_error, url, reason})
        end
      end)

      {:noreply, assign(socket, url: url, loading: true, error: nil, result: nil)}
    end
  end

  @impl true
  def handle_info({:audit_complete, url, result}, socket) do
    # Persist the result to the database
    Audits.save_audit(url, result)

    # Broadcast to all connected LiveViews
    Phoenix.PubSub.broadcast(
      SeopeekDashboard.PubSub,
      "audit:results",
      {:new_audit, url, result}
    )

    {:noreply,
     assign(socket,
       loading: false,
       result: result,
       recent_audits: Audits.list_recent(10)
     )}
  end

  def handle_info({:audit_error, _url, reason}, socket) do
    {:noreply, assign(socket, loading: false, error: reason)}
  end

  def handle_info({:new_audit, _url, _result}, socket) do
    {:noreply, assign(socket, recent_audits: Audits.list_recent(10))}
  end
end

The flow is straightforward. When the user submits a URL, the "audit" event handler spawns an async Task so the LiveView process is never blocked. The task calls the SEOPeek API client, then sends the result back to the LiveView via send/2. The handle_info/2 callback receives the result, persists it to the database, broadcasts it via PubSub, and updates the socket assigns. LiveView then automatically pushes the minimal HTML diff to the browser. (On recent LiveView versions you could also use start_async/3, which ties the task's lifecycle to the LiveView and delivers results through handle_async/3; the plain Task plus send/2 pattern used here keeps the message flow explicit.)
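One piece the generator does not write for you: a route pointing at this LiveView. Assuming the default :browser pipeline from mix phx.new, add it to lib/seopeek_dashboard_web/router.ex:

```elixir
# lib/seopeek_dashboard_web/router.ex
scope "/", SeopeekDashboardWeb do
  pipe_through :browser

  live "/audit", AuditLive
end
```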

Here is the corresponding template. Create lib/seopeek_dashboard_web/live/audit_live.html.heex:

<div class="max-w-3xl mx-auto p-8">
  <h1 class="text-3xl font-bold mb-8">SEO Audit Dashboard</h1>

  <form phx-submit="audit" class="flex gap-4 mb-8">
    <input
      type="url"
      name="url"
      value={@url}
      placeholder="https://example.com"
      class="flex-1 px-4 py-3 bg-zinc-900 border border-zinc-700 rounded-lg"
      required
    />
    <button
      type="submit"
      disabled={@loading}
      class="px-6 py-3 bg-emerald-500 text-black font-semibold rounded-lg"
    >
      <%= if @loading, do: "Auditing...", else: "Audit URL" %>
    </button>
  </form>

  <%= if @error do %>
    <div class="p-4 bg-red-900/20 border border-red-800 rounded-lg mb-6">
      <%= @error %>
    </div>
  <% end %>

  <%= if @result do %>
    <div class="p-6 bg-zinc-900 border border-zinc-700 rounded-xl mb-8">
      <div class="flex items-center justify-between mb-4">
        <h2 class="text-xl font-semibold"><%= @result["url"] %></h2>
        <span class="text-3xl font-bold text-emerald-400">
          <%= @result["score"] %>/100
        </span>
      </div>
      <div class="grid grid-cols-2 gap-3">
        <%= for {check_name, check} <- @result["checks"] || %{} do %>
          <div class={"p-3 rounded-lg #{if check["pass"], do: "bg-emerald-900/20", else: "bg-red-900/20"}"}>
            <div class="font-mono text-sm"><%= check_name %></div>
            <div class="text-xs text-zinc-400 mt-1"><%= check["message"] %></div>
          </div>
        <% end %>
      </div>
    </div>
  <% end %>

  <h3 class="text-lg font-semibold mb-4">Recent Audits</h3>
  <table class="w-full text-sm">
    <thead>
      <tr class="border-b border-zinc-800">
        <th class="py-2 text-left">URL</th>
        <th class="py-2 text-right">Score</th>
        <th class="py-2 text-right">Grade</th>
        <th class="py-2 text-right">Date</th>
      </tr>
    </thead>
    <tbody>
      <%= for audit <- @recent_audits do %>
        <tr class="border-b border-zinc-900">
          <td class="py-2"><%= audit.url %></td>
          <td class="py-2 text-right font-mono"><%= audit.score %></td>
          <td class="py-2 text-right"><%= audit.grade %></td>
          <td class="py-2 text-right text-zinc-500"><%= audit.inserted_at %></td>
        </tr>
      <% end %>
    </tbody>
  </table>
</div>

No JavaScript. No REST endpoints. No client-side state management library. The form submits via WebSocket, the server processes the audit, and LiveView patches the DOM. Every connected browser sees updates in real-time because of PubSub, which we will configure next.

4. Streaming Results with Phoenix PubSub

Phoenix PubSub is a distributed publish-subscribe system that ships with every Phoenix project. When an audit completes—whether triggered by a user in the LiveView, a batch GenServer, or a scheduled Quantum job—it broadcasts the result to a topic. Every LiveView subscribed to that topic receives the update and re-renders.

The pattern is simple. In the LiveView's mount/3, subscribe:

Phoenix.PubSub.subscribe(SeopeekDashboard.PubSub, "audit:results")

From any process that completes an audit, broadcast:

Phoenix.PubSub.broadcast(
  SeopeekDashboard.PubSub,
  "audit:results",
  {:new_audit, url, result}
)

Every connected LiveView receives this message in handle_info/2 and can update its assigns. This is what makes the dashboard feel real-time: when the batch worker audits a URL, every open browser tab sees the result appear without polling or refreshing.
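You can verify the broadcast wiring from an IEx session attached to the running app (iex -S mix phx.server). The payload shown here is illustrative, not the API's exact shape:

```elixir
iex> Phoenix.PubSub.subscribe(SeopeekDashboard.PubSub, "audit:results")
:ok
iex> # trigger an audit from the browser, then drain this process's mailbox:
iex> flush()
{:new_audit, "https://example.com", %{"score" => 87}}
:ok
```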

PubSub also works across nodes in a clustered deployment. If you run multiple instances of your Phoenix app behind a load balancer, a broadcast on one node reaches LiveViews on every other node automatically. This is a core BEAM capability that would require Redis or a message queue in most other languages.

5. GenServer for Batch Auditing with Rate Limiting

For auditing large lists of URLs—an entire sitemap, a crawl export, or a client's full site—you need a background worker that processes URLs at a controlled rate. A GenServer is the right abstraction: it maintains state (the URL queue, active count, results), handles incoming messages (new URLs to audit), and schedules its own work with Process.send_after/3.

Create lib/seopeek_dashboard/audit_worker.ex:

defmodule SeopeekDashboard.AuditWorker do
  use GenServer
  require Logger

  alias SeopeekDashboard.SeopeekClient
  alias SeopeekDashboard.Audits

  @max_concurrency 5
  @delay_between_batches 1_000  # 1 second between batches

  # --- Client API ---

  def start_link(_opts) do
    GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
  end

  @doc "Enqueue a list of URLs for batch auditing."
  def enqueue(urls) when is_list(urls) do
    GenServer.cast(__MODULE__, {:enqueue, urls})
  end

  @doc "Get the current queue status."
  def status do
    GenServer.call(__MODULE__, :status)
  end

  # --- Server Callbacks ---

  @impl true
  def init(_) do
    state = %{
      queue: :queue.new(),
      pending: 0,
      total: 0,
      completed: 0,
      failed: 0
    }
    {:ok, state}
  end

  @impl true
  def handle_cast({:enqueue, urls}, state) do
    queue = Enum.reduce(urls, state.queue, fn url, q ->
      :queue.in(url, q)
    end)

    new_state = %{state | queue: queue, total: state.total + length(urls)}
    schedule_next_batch()
    {:noreply, new_state}
  end

  @impl true
  def handle_call(:status, _from, state) do
    status = %{
      queued: :queue.len(state.queue),
      pending: state.pending,
      total: state.total,
      completed: state.completed,
      failed: state.failed
    }
    {:reply, status, state}
  end

  @impl true
  def handle_info(:process_batch, state) do
    available = @max_concurrency - state.pending
    {urls_to_process, remaining_queue} = dequeue_n(state.queue, available)

    # Spawn a task for each URL
    Enum.each(urls_to_process, fn url ->
      Task.start(fn ->
        result = SeopeekClient.audit_url(url)
        send(__MODULE__, {:audit_done, url, result})
      end)
    end)

    new_pending = state.pending + length(urls_to_process)
    {:noreply, %{state | queue: remaining_queue, pending: new_pending}}
  end

  def handle_info({:audit_done, url, {:ok, result}}, state) do
    Audits.save_audit(url, result)

    Phoenix.PubSub.broadcast(
      SeopeekDashboard.PubSub,
      "audit:results",
      {:new_audit, url, result}
    )

    Logger.info("Audited #{url}: score #{result["score"]}")
    new_state = %{state | pending: state.pending - 1, completed: state.completed + 1}
    maybe_schedule_next(new_state)
    {:noreply, new_state}
  end

  def handle_info({:audit_done, url, {:error, reason}}, state) do
    Logger.warning("Audit failed for #{url}: #{reason}")
    new_state = %{state | pending: state.pending - 1, failed: state.failed + 1}
    maybe_schedule_next(new_state)
    {:noreply, new_state}
  end

  # --- Private Helpers ---

  defp schedule_next_batch do
    Process.send_after(self(), :process_batch, @delay_between_batches)
  end

  defp maybe_schedule_next(state) do
    if :queue.len(state.queue) > 0 and state.pending < @max_concurrency do
      schedule_next_batch()
    end
  end

  # With no concurrency budget left, leave the queue untouched
  defp dequeue_n(queue, n) when n <= 0, do: {[], queue}

  defp dequeue_n(queue, n) do
    Enum.reduce_while(1..n, {[], queue}, fn _, {acc, q} ->
      case :queue.out(q) do
        {{:value, item}, new_q} -> {:cont, {[item | acc], new_q}}
        {:empty, empty_q} -> {:halt, {acc, empty_q}}
      end
    end)
    |> then(fn {urls, q} -> {Enum.reverse(urls), q} end)
  end
end

The rate limiting strategy is intentionally simple. The GenServer processes up to 5 URLs concurrently. After each batch, it waits 1 second before starting the next batch. This caps throughput at about five new audits per second, which stays well within the SEOPeek API rate limits for all plans. When an audit completes, the result is saved to PostgreSQL, broadcast via PubSub, and the next URL in the queue starts.

To kick off a batch audit from anywhere in your application:

# From a LiveView, controller, or IEx console
SeopeekDashboard.AuditWorker.enqueue([
  "https://example.com",
  "https://example.com/about",
  "https://example.com/pricing",
  "https://example.com/blog"
])

# Check progress
SeopeekDashboard.AuditWorker.status()
# => %{queued: 2, pending: 2, total: 4, completed: 0, failed: 0}

Why not Task.async_stream? Task.async_stream is a good fit for one-off batch jobs where you hand over a fixed list and collect the results at the end, but a GenServer gives you persistent state (queue length, completed count), the ability to add URLs while a batch is running, and automatic restart via the supervision tree. If the worker crashes mid-batch, the supervisor restarts it and you can re-enqueue the remaining URLs.
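For comparison, a one-off batch with Task.async_stream might look like this sketch (same 5-way concurrency, no persistent queue; error handling is deliberately minimal):

```elixir
urls
|> Task.async_stream(&SeopeekDashboard.SeopeekClient.audit_url/1,
  max_concurrency: 5,
  timeout: 35_000,
  on_timeout: :kill_task
)
|> Enum.each(fn
  # outer tuple is the task result, inner tuple comes from audit_url/1
  {:ok, {:ok, result}} -> IO.puts("scored #{result["score"]}")
  {:ok, {:error, reason}} -> IO.warn("audit failed: #{reason}")
  {:exit, reason} -> IO.warn("task exited: #{inspect(reason)}")
end)
```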

6. Storing Audit History in Ecto and PostgreSQL

Every audit result should be persisted so you can track SEO scores over time, identify regressions, and generate historical reports. Create an Ecto schema and migration for audit records.

Generate the migration:

mix ecto.gen.migration create_audits

Edit the migration file:

defmodule SeopeekDashboard.Repo.Migrations.CreateAudits do
  use Ecto.Migration

  def change do
    create table(:audits) do
      add :url, :string, null: false
      add :score, :integer, null: false
      add :grade, :string, null: false
      add :checks, :map, default: %{}
      add :raw_response, :map, default: %{}

      timestamps()
    end

    create index(:audits, [:url])
    create index(:audits, [:score])
    create index(:audits, [:inserted_at])
  end
end

Create the schema at lib/seopeek_dashboard/audits/audit.ex:

defmodule SeopeekDashboard.Audits.Audit do
  use Ecto.Schema
  import Ecto.Changeset

  schema "audits" do
    field :url, :string
    field :score, :integer
    field :grade, :string
    field :checks, :map, default: %{}
    field :raw_response, :map, default: %{}

    timestamps()
  end

  def changeset(audit, attrs) do
    audit
    |> cast(attrs, [:url, :score, :grade, :checks, :raw_response])
    |> validate_required([:url, :score, :grade])
    |> validate_number(:score, greater_than_or_equal_to: 0, less_than_or_equal_to: 100)
  end
end

And the context module at lib/seopeek_dashboard/audits.ex:

defmodule SeopeekDashboard.Audits do
  import Ecto.Query
  alias SeopeekDashboard.Repo
  alias SeopeekDashboard.Audits.Audit

  def save_audit(url, api_result) do
    attrs = %{
      url: url,
      score: api_result["score"],
      grade: api_result["grade"],
      checks: api_result["checks"] || %{},
      raw_response: api_result
    }

    %Audit{}
    |> Audit.changeset(attrs)
    |> Repo.insert()
  end

  def list_recent(limit \\ 20) do
    Audit
    |> order_by(desc: :inserted_at)
    |> limit(^limit)
    |> Repo.all()
  end

  def get_history(url, limit \\ 30) do
    Audit
    |> where(url: ^url)
    |> order_by(desc: :inserted_at)
    |> limit(^limit)
    |> Repo.all()
  end

  def average_score_by_url do
    Audit
    |> group_by(:url)
    |> select([a], %{url: a.url, avg_score: avg(a.score), audit_count: count(a.id)})
    |> order_by([a], desc: avg(a.score))
    |> Repo.all()
  end
end

Run the migration:

mix ecto.migrate

Now every audit result is stored with the full URL, numeric score, letter grade, individual check results, and the complete raw API response. The get_history/2 function lets you query score trends for a specific URL over time, while average_score_by_url/0 gives you a bird's-eye view of which pages on your site are the strongest and weakest from an SEO perspective.
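Building on get_history/2, a small helper can flag score regressions between the two most recent audits of a URL. This is a sketch you could add to the Audits context; it returns nil until a URL has at least two audits:

```elixir
def score_delta(url) do
  case get_history(url, 2) do
    # results come back newest-first, so a negative delta means a regression
    [latest, previous | _] -> latest.score - previous.score
    _ -> nil
  end
end
```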

7. Scheduling Periodic Audits with Quantum

Quantum is the go-to Elixir library for cron-style job scheduling. It runs as a supervised process in your application and triggers functions at specified intervals. This is how you automate daily or weekly site-wide SEO audits without any external cron daemon or task runner.

Create the scheduler at lib/seopeek_dashboard/scheduler.ex:

defmodule SeopeekDashboard.Scheduler do
  use Quantum, otp_app: :seopeek_dashboard
end

Configure the schedule in config/config.exs:

config :seopeek_dashboard, SeopeekDashboard.Scheduler,
  jobs: [
    # Run a full site audit every day at 3:00 AM UTC
    daily_audit: [
      schedule: "0 3 * * *",
      task: {SeopeekDashboard.ScheduledAudits, :run_daily_audit, []}
    ],
    # Run critical page checks every 6 hours
    critical_pages: [
      schedule: "0 */6 * * *",
      task: {SeopeekDashboard.ScheduledAudits, :run_critical_audit, []}
    ]
  ]

Create the task module at lib/seopeek_dashboard/scheduled_audits.ex:

defmodule SeopeekDashboard.ScheduledAudits do
  require Logger

  alias SeopeekDashboard.AuditWorker

  @critical_urls [
    "https://yoursite.com",
    "https://yoursite.com/pricing",
    "https://yoursite.com/features",
    "https://yoursite.com/blog"
  ]

  @full_site_urls [
    "https://yoursite.com",
    "https://yoursite.com/about",
    "https://yoursite.com/pricing",
    "https://yoursite.com/features",
    "https://yoursite.com/blog",
    "https://yoursite.com/docs",
    "https://yoursite.com/contact",
    "https://yoursite.com/changelog"
    # Add all your pages here, or fetch from a sitemap
  ]

  def run_daily_audit do
    Logger.info("Starting scheduled daily SEO audit for #{length(@full_site_urls)} URLs")
    AuditWorker.enqueue(@full_site_urls)
  end

  def run_critical_audit do
    Logger.info("Starting critical page SEO check for #{length(@critical_urls)} URLs")
    AuditWorker.enqueue(@critical_urls)
  end
end

With this configuration, Quantum triggers a full-site audit every night at 3 AM UTC and checks your most important pages every 6 hours. Each audit goes through the GenServer worker (with rate limiting), gets stored in PostgreSQL, and broadcasts to any connected LiveView dashboard. You wake up to a full audit history in your dashboard without lifting a finger.

Scaling tip: For sites with hundreds of pages, fetch URLs dynamically from your sitemap instead of hardcoding them. Write a SeopeekDashboard.SitemapParser module that fetches and parses your sitemap.xml, then pass the URL list to AuditWorker.enqueue/1. The GenServer handles the rate limiting regardless of list size.
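Here is one minimal sketch of such a SitemapParser. It assumes a flat urlset sitemap and pulls <loc> values with a regex; for sitemap index files or messier XML, reach for a real XML parser such as SweetXml instead:

```elixir
defmodule SeopeekDashboard.SitemapParser do
  @moduledoc "Fetches a sitemap.xml and extracts its <loc> URLs."

  def fetch_urls(sitemap_url) do
    request = Finch.build(:get, sitemap_url)

    case Finch.request(request, SeopeekDashboard.Finch, receive_timeout: 15_000) do
      {:ok, %Finch.Response{status: 200, body: body}} ->
        # Naive extraction: every <loc>...</loc> in the document
        urls =
          ~r|<loc>\s*(.*?)\s*</loc>|s
          |> Regex.scan(body, capture: :all_but_first)
          |> List.flatten()

        {:ok, urls}

      {:ok, %Finch.Response{status: status}} ->
        {:error, "sitemap returned HTTP #{status}"}

      {:error, reason} ->
        {:error, inspect(reason)}
    end
  end
end
```

Wire it up with: with {:ok, urls} <- SeopeekDashboard.SitemapParser.fetch_urls("https://yoursite.com/sitemap.xml"), do: SeopeekDashboard.AuditWorker.enqueue(urls).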

Putting it all together

Start the Phoenix server and open your browser:

mix phx.server
# Open http://localhost:4000/audit

You now have a fully functional SEO audit dashboard. Enter a URL in the form and watch the result appear in real time. Open a second browser tab and see PubSub push the same result there instantly. Check your database for the audit history. Wait for Quantum to trigger the nightly scan, or call SeopeekDashboard.ScheduledAudits.run_daily_audit() from an IEx console to test it immediately.

The entire system runs in a single Elixir application with no external dependencies beyond PostgreSQL. No Redis, no Sidekiq, no separate cron daemon, no JavaScript build step. The BEAM VM handles concurrency, PubSub handles real-time communication, Ecto handles persistence, and Quantum handles scheduling. This is the Elixir advantage.

8. Pricing and Rate Limits

The SEOPeek API has three tiers, all using the same endpoint. The free tier requires no API key.

Plan      Audits          Price    Best for
Free      50 / day        $0       Local dev, small sites, trying it out
Starter   1,000 / month   $9/mo    Daily critical page checks, small teams
Pro       10,000 / month  $29/mo   Agencies, full-site nightly audits, multiple clients

For the Elixir dashboard, the tiers map cleanly to the workflows in this guide: the free tier covers local development and the 6-hourly critical-page checks, Starter covers nightly audits of a small site, and Pro covers agency-scale batch auditing across multiple clients.

View all plan details and sign up on the SEOPeek pricing page.

9. Frequently Asked Questions

Why use Elixir and Phoenix LiveView for an SEO audit dashboard?

Elixir runs on the BEAM virtual machine, which was designed for telecom systems that require extreme concurrency and uptime. A single Elixir process uses about 2 KB of memory, so you can run thousands of concurrent audit tasks without breaking a sweat. Phoenix LiveView eliminates the need for a separate frontend framework—no React, no API layer, no client-side state management. The server pushes HTML diffs over WebSockets, which means your audit results appear in real time with zero JavaScript. Combined with OTP supervision trees, your audit workers automatically restart if they crash.

How does Phoenix PubSub help with real-time SEO audit results?

Phoenix PubSub is a distributed publish-subscribe system built into every Phoenix project. When any process completes an audit—a LiveView handler, the GenServer batch worker, or a Quantum scheduled job—it broadcasts the result to a PubSub topic. Every LiveView subscribed to that topic receives the message and re-renders. This works across multiple nodes in a clustered deployment without Redis or any external message broker. It is the simplest path to a real-time multi-user dashboard.

Can I schedule automated SEO audits in Elixir?

Yes. Quantum is an Elixir library that provides cron-style job scheduling as a supervised OTP process. You define schedules in your application config (e.g., "0 3 * * *" for daily at 3 AM) and point them at module functions. Each scheduled job enqueues URLs into the GenServer worker, which handles rate limiting and persistence automatically. No external cron daemon, no Sidekiq, no separate process to manage.

Does the SEOPeek API require an API key for the free tier?

The free tier (50 audits/day) requires no API key at all. Send a GET request with the url parameter and you get back a full audit result. For the Starter ($9/month) and Pro ($29/month) plans, you receive an API key to include as an x-api-key header for higher rate limits. The client module in this guide already supports both modes.

How do I rate-limit SEO audit requests in a GenServer?

The GenServer in this guide uses Process.send_after/3 to schedule batch processing at fixed intervals. It maintains a maximum concurrency of 5 simultaneous requests and waits 1 second between batches. This maps to roughly 5 audits per second, which is well within the rate limits for all SEOPeek plans. For more granular control, you can use the ExRated library, which implements a token-bucket rate limiter as a GenServer. Wrap each API call in ExRated.check_rate("seopeek_api", 1_000, 5) to enforce a hard ceiling of 5 requests per second.
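Sketched against ExRated's check_rate/3 (bucket id, window in milliseconds, request limit), the guard might look like this; the :rate_limited atom is our own convention, not part of ExRated:

```elixir
def rate_limited_audit(url) do
  case ExRated.check_rate("seopeek_api", 1_000, 5) do
    {:ok, _count} ->
      SeopeekDashboard.SeopeekClient.audit_url(url)

    {:error, _limit} ->
      # over budget for this 1-second window; the caller can retry later
      {:error, :rate_limited}
  end
end
```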

Build Your SEO Audit Dashboard Today

50 free audits per day. No API key required. One mix phx.server and you have a real-time SEO audit dashboard.
See pricing plans for higher volumes.

Try SEOPeek Free →