Attribution and Deep Link Support Lab

A support-focused case study featuring event pipelines, API troubleshooting, SQL investigations, and visual diagnostics for real-world SaaS tracking failures

Posted by John Plakon on April 9, 2025
  • Scenarios Simulated: 8 realistic support cases covering redirects, missing events, duplicates, and attribution mismatches.
  • Event Types Tracked: 4 event types (click, install, open, conversion) tied together across a mock attribution flow.
  • Investigation Queries: 12 SQL queries used to reconcile raw event traffic against final reporting logic and customer symptoms.
  • Runbooks Written: 6 sets of internal support notes and customer-facing responses for repeatable issue resolution.

Introduction

Modern growth and attribution platforms do not fail in obvious ways. A customer may report that campaign performance looks wrong, that installs are undercounted, or that a deep link opens the wrong destination. On the surface, those sound like dashboard issues. In reality, the root cause may live inside a redirect chain, a malformed API payload, a missing device identifier, duplicate client retries, or a mismatch between raw events and attribution logic downstream.

I built this lab to mirror that exact support engineering workflow. It includes a mock event-tracking pipeline, deep link flow, SQL investigation layer, and visual analysis dashboards designed to explain both what went wrong and how I found it. Instead of presenting a generic analytics notebook, this project focuses on the technical investigation process that support engineers use to move from customer symptom to reproducible root cause.

End-to-End Tracking Flow

The project simulates a simplified but realistic attribution pipeline so that support issues can be reproduced, traced, and documented clearly.

1. Click: Campaign link stores metadata and device context.
2. Redirect: Deep link or fallback page behavior is executed.
3. Open / Install: User action is logged and tied back to prior click data.
4. Conversion: Revenue event is attributed or fails due to data loss.
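The four stages above can be sketched as a single event chain. This is a minimal Python sketch, not the lab's actual code; the class and field names (`Click`, `preserve_params`, and so on) are illustrative assumptions. It shows how one dropped handoff at the redirect stage leaves the final conversion unattributed.

```python
# Minimal sketch of the four-stage flow (illustrative names, not the lab's schema).
from dataclasses import dataclass, field
from typing import Optional
import uuid

@dataclass
class Click:
    campaign_name: str
    device_id: str
    click_id: str = field(default_factory=lambda: uuid.uuid4().hex)

def redirect(click: Click, preserve_params: bool = True) -> Optional[Click]:
    """Stage 2: a broken redirect drops the campaign context entirely."""
    return click if preserve_params else None

def open_event(handoff: Optional[Click], device_id: str) -> dict:
    """Stage 3: the open/install event carries attribution only if the handoff survived."""
    return {"device_id": device_id,
            "click_id": handoff.click_id if handoff else None,
            "campaign_name": handoff.campaign_name if handoff else None}

def conversion(open_evt: dict, revenue: float) -> dict:
    """Stage 4: the conversion is attributed only when the linking field is present."""
    return {**open_evt, "revenue": revenue,
            "attributed": open_evt["click_id"] is not None}

click = Click(campaign_name="spring_sale", device_id="device-123")
healthy = conversion(open_event(redirect(click, True), "device-123"), 9.99)
broken = conversion(open_event(redirect(click, False), "device-123"), 9.99)
print(healthy["attributed"], broken["attributed"])  # True False
```

The broken path still records real user activity at every stage, which is why these failures look like reporting bugs rather than data loss.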

Sample Support Queue

  • High: Missing campaign attribution. Clicks are visible in logs, but conversions show up as unattributed in reporting.
  • Medium: Deep link opens fallback page. Users land on mobile web instead of the in-app destination after ad clicks.
  • Medium: Duplicate conversion spikes. Revenue doubles in reporting due to client retry behavior without deduplication.
  • Low: Timezone reporting confusion. Customer thinks events are missing, but the issue is delayed bucketing across time zones.
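The timezone scenario is easy to demonstrate concretely. The sketch below (plain Python, with an illustrative timestamp) shows how one event logged at 02:30 UTC lands in different daily buckets depending on the reporting timezone, which is exactly what customers experience as "missing" events.

```python
# Sketch of the timezone-bucketing confusion: the same UTC event falls into
# different daily buckets depending on the reporting timezone (illustrative only).
from datetime import datetime, timezone, timedelta

event_utc = datetime(2025, 4, 1, 2, 30, tzinfo=timezone.utc)  # 02:30 UTC on Apr 1

utc_bucket = event_utc.date().isoformat()

# A customer reporting in UTC-8 sees the same event on the previous day.
local = event_utc.astimezone(timezone(timedelta(hours=-8)))
local_bucket = local.date().isoformat()

print(utc_bucket, local_bucket)  # 2025-04-01 2025-03-31
```

Nothing is lost here; the event simply shifts buckets, so the support response is an explanation rather than a fix.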

Visual Overview of the Problem Space

Before diving into code and ticket notes, I wanted the project to visually tell the story of support investigations. The charts below are not just decorative. They are the kinds of visual diagnostics that help explain what the customer sees, what the raw system shows, and where the attribution chain actually breaks.

Attribution Funnel Health
A high-level view of how traffic moves through the pipeline from click to conversion, and where support starts to notice abnormal drop-off.
The largest relative loss occurs between open/install and conversion, which is exactly where a support engineer would begin validating event continuity, payload integrity, and attribution keys.
Issue Volume by Failure Type
This chart shows how support workload clusters around a handful of recurring technical categories rather than random one-off errors.
Redirect problems, duplicate events, and unattributed conversions tend to create the most confusing customer-facing symptoms because the visible issue usually appears downstream from the real cause.
Campaign Trend: Clicks vs Attributed Conversions
A month-by-month comparison that reveals when the pipeline begins to drift from normal behavior even though traffic volume remains healthy.
Notice how clicks remain strong while attributed conversions flatten and dip. That pattern often points support away from acquisition quality and toward implementation, redirect, or event continuity issues.

What I Built

The lab combines a lightweight API, a simulated redirect service, a small event warehouse, and an investigation layer built with SQL. The intent was to reproduce realistic technical support scenarios in a controlled environment so that I could diagnose each issue step by step and document the findings as if I were handing them to customers, product teams, or engineers.

Core System Components

  • Mock tracking API for click, install, open, and conversion events
  • Redirect logic that preserves or intentionally drops campaign parameters
  • SQLite/Postgres event tables for raw and derived event records
  • Support runbooks for repeatable troubleshooting workflows
  • Ticket-ready summaries for customer-safe communication

Support Skills Demonstrated

  • SQL-based data reconciliation and event tracing
  • API debugging and payload validation
  • Chrome Developer Tools network inspection
  • Root cause analysis and bug isolation
  • Clear communication to both technical and non-technical audiences

Case Study: Missing Conversion Attribution

One of the strongest scenarios in the lab involved a customer complaint that campaign clicks were visible, but conversions were not being attributed properly. At first glance, the dashboard made it look like campaign performance had suddenly deteriorated. The deeper investigation showed something much more specific: the click event was being captured correctly, but the downstream conversion payload no longer contained the linking field required to preserve attribution through the redirect and app-open process.

To diagnose this, I validated raw clicks, checked whether install/open events carried the expected device identifiers, and compared the conversion payload structure with known-good events. The problem ended up being a session handoff issue introduced during the redirect phase.
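The payload comparison step can be sketched as a simple key diff between a known-good conversion and the suspect one (the field names here are illustrative):

```python
# Sketch of the payload comparison step: diff a suspect conversion payload
# against a known-good one to spot dropped linking fields (names illustrative).
known_good = {"device_id": "device-123", "click_id": "abc123",
              "campaign_name": "spring_sale", "revenue": 9.99}
suspect = {"device_id": "device-123", "revenue": 9.99}

missing = set(known_good) - set(suspect)
print(sorted(missing))  # ['campaign_name', 'click_id']
```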

SQL Investigation Query
-- Find clicks that never produced a downstream attributed conversion
SELECT
  c.click_id,
  c.campaign_name,
  c.device_id,
  c.click_timestamp,
  v.conversion_timestamp,
  v.revenue
FROM click_events c
LEFT JOIN conversion_events v
  ON c.device_id = v.device_id
WHERE v.conversion_timestamp IS NULL  -- no matching conversion row
ORDER BY c.click_timestamp DESC;

This query highlights click records that never generated a downstream attributed conversion. From there, I compared successful versus failed sessions and found that the broken flow was dropping the campaign context before the conversion event was posted.

Healthy vs Broken Session Paths
A comparison of successful and failed event chains reveals how a single missing handoff field can sever attribution downstream.
The broken path still records user activity, but the absence of preserved campaign context causes the final conversion to become unattributed, which is exactly the kind of issue customers often perceive as a reporting bug.

Case Study: Duplicate Events and Inflated Revenue

Another scenario reproduced a spike in conversions that looked positive at first, but was actually caused by duplicate conversion submissions. The underlying problem was a client-side retry pattern with no proper deduplication rule. From a support perspective, this is a great example of why engineers and customers can initially interpret the same data in opposite ways: the dashboard seems to show improvement, while support has to determine whether the increase reflects real customer behavior or technical duplication.

I grouped events by device, campaign, and event type to identify records that should not repeat within the same session window. The resulting investigation clearly showed duplicate conversion clusters tied to retry behavior rather than legitimate user activity.

Duplicate Detection Query
-- Surface events that repeat for the same device, campaign, and event type
SELECT
  device_id,
  campaign_name,
  event_name,
  COUNT(*) AS duplicate_count
FROM conversion_events
GROUP BY device_id, campaign_name, event_name
HAVING COUNT(*) > 1  -- more than one submission suggests client retries
ORDER BY duplicate_count DESC;
“This issue is not caused by a reporting lag. The duplicate increase is present in the raw conversion table itself, which indicates the source problem exists before dashboard aggregation.”
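The fix side of this scenario can be sketched as a session-window deduplication rule. This is a minimal Python illustration; the 30-minute window and the field names are assumptions for the sketch, not the lab's actual rule.

```python
# Sketch of the deduplication rule the retry pattern was missing: collapse
# repeated conversions for the same (device, campaign, event) key inside one
# session window. Window length and field names are illustrative assumptions.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=30)  # assumed session window

def dedupe(events):
    """Keep the first conversion per key within each session window."""
    last_seen = {}
    kept = []
    for evt in sorted(events, key=lambda e: e["ts"]):
        key = (evt["device_id"], evt["campaign_name"], evt["event_name"])
        if key in last_seen and evt["ts"] - last_seen[key] < WINDOW:
            continue  # retry-style duplicate: drop it
        last_seen[key] = evt["ts"]
        kept.append(evt)
    return kept

events = [
    {"device_id": "d1", "campaign_name": "c1", "event_name": "purchase",
     "ts": datetime(2025, 4, 1, 10, 0)},
    {"device_id": "d1", "campaign_name": "c1", "event_name": "purchase",
     "ts": datetime(2025, 4, 1, 10, 1)},   # retry duplicate: dropped
    {"device_id": "d1", "campaign_name": "c1", "event_name": "purchase",
     "ts": datetime(2025, 4, 1, 11, 0)},   # later, legitimate repeat: kept
]
print(len(dedupe(events)))  # 2
```

A production system would more likely use a client-generated idempotency key, but the window approach shows why the raw table inflates without any dedup rule at all.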

Why the Visual Layer Matters

Technical support is not just about finding the answer. It is also about explaining the answer. The visualizations in this project were designed to help different audiences understand the same issue from their own perspective. A customer may need reassurance that the issue has been isolated. A product manager may want to see where in the funnel the failure begins. An engineer may care most about the consistency of identifiers across event tables. Good support work often means translating one technical reality into several different explanations without losing accuracy.

Resolution Time by Issue Category
Some problems are easy to identify but hard to explain, while others are easy to explain but difficult to isolate technically.
Redirect issues and attribution continuity problems often consume more time because they require cross-checking browser behavior, API payloads, and warehouse outcomes together rather than in isolation.

How I Structured the Support Workflow

Each scenario in the lab follows a repeatable support engineering process:

  • Reproduce the issue in a controlled environment
  • Inspect redirect behavior and network traffic in the browser
  • Validate request payloads and response patterns
  • Compare raw tables against reporting logic with SQL
  • Determine whether the issue is implementation error, product defect, expected behavior, or data latency
  • Write an internal summary and a customer-facing response
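Step four of that workflow, comparing raw tables against reporting logic, can be reproduced end to end with an in-memory SQLite database. The table and column names below mirror the queries earlier in this post but are still illustrative:

```python
# Sketch of the raw-vs-reporting reconciliation step using in-memory SQLite;
# table and column names mirror the queries above but are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE click_events (click_id TEXT, device_id TEXT, campaign_name TEXT);
CREATE TABLE conversion_events (device_id TEXT, revenue REAL);
INSERT INTO click_events VALUES ('c1', 'd1', 'spring_sale'), ('c2', 'd2', 'spring_sale');
INSERT INTO conversion_events VALUES ('d1', 9.99);  -- d2 never converted
""")

# Clicks with no downstream conversion: candidate attribution breaks.
rows = con.execute("""
    SELECT c.click_id
    FROM click_events c
    LEFT JOIN conversion_events v ON c.device_id = v.device_id
    WHERE v.device_id IS NULL
""").fetchall()
print(rows)  # [('c2',)]
```

Running the reconciliation in a throwaway database like this is also how each lab scenario gets reproduced before any conclusion reaches a ticket.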

That workflow is what makes this project relevant to technical support engineering roles. It shows not only that I can analyze data, but that I can diagnose technical failures, document the resolution path, and communicate clearly across functions.

Tech Stack and Investigation Toolkit

The implementation is intentionally practical rather than overengineered. The value of the project is in the support workflow, not in flashy infrastructure. I focused on tools that let me move quickly from reproduction to evidence to documented resolution.

SQL · JavaScript · HTML/CSS · FastAPI/Flask · SQLite/Postgres · Chrome DevTools · API Payload Debugging · Support Runbooks · Attribution Diagnostics

Conclusion

This project was designed to look and feel like the work of a real support engineer, not just an analyst. It combines debugging, SQL investigation, browser inspection, event tracing, and documentation into a single case study that reflects the realities of supporting APIs, deep links, and attribution systems in a SaaS environment.

The final result is a portfolio piece that shows I can move from a vague customer-reported symptom to a reproducible technical explanation, then turn that explanation into a clear resolution path for support, engineering, and customer stakeholders.