Introduction
Modern growth and attribution platforms do not fail in obvious ways. A customer may report that campaign performance looks wrong, that installs are undercounted, or that a deep link opens the wrong destination. On the surface, those sound like dashboard issues. In reality, the root cause may live inside a redirect chain, a malformed API payload, a missing device identifier, duplicate client retries, or a mismatch between raw events and attribution logic downstream.
I built this lab to mirror that exact support engineering workflow. It includes a mock event-tracking pipeline, deep link flow, SQL investigation layer, and visual analysis dashboards designed to explain both what went wrong and how I found it. Instead of presenting a generic analytics notebook, this project focuses on the technical investigation process that support engineers use to move from customer symptom to reproducible root cause.
The project simulates a simplified but realistic attribution pipeline so that support issues can be reproduced, traced, and documented clearly.
- Click: Campaign link stores metadata and device context.
- Redirect: Deep link or fallback page behavior is executed.
- Open / Install: User action is logged and tied back to prior click data.
- Conversion: Revenue event is attributed or fails due to data loss.
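The funnel above can be sketched as a minimal in-memory event model. This is a simplified illustration, not the lab's actual schema; the class and field names (`ClickEvent`, `ConversionEvent`, `attribute`) are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClickEvent:
    click_id: str
    campaign_name: str
    device_id: str   # device context captured at click time
    ts: float        # epoch seconds

@dataclass
class ConversionEvent:
    device_id: str
    revenue: float
    ts: float
    click_id: Optional[str] = None  # attribution link; None if context was lost en route

def attribute(conversion: ConversionEvent, clicks: list[ClickEvent]) -> Optional[ClickEvent]:
    """Tie a conversion back to the most recent prior click for the same device.
    Returns None when no prior click exists, i.e. the conversion is unattributed."""
    prior = [c for c in clicks if c.device_id == conversion.device_id and c.ts <= conversion.ts]
    return max(prior, key=lambda c: c.ts) if prior else None
```

The key idea is that attribution is only as strong as the linking field (here `device_id`); if any stage of the funnel drops it, the conversion silently falls out of reporting.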
Sample Support Queue
- Missing campaign attribution: Clicks are visible in logs, but conversions show up as unattributed in reporting.
- Deep link opens fallback page: Users land on mobile web instead of the in-app destination after ad clicks.
- Duplicate conversion spikes: Revenue doubles in reporting due to client retry behavior without deduplication.
- Timezone reporting confusion: Customer thinks events are missing, but the issue is delayed bucketing across time zones.
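The timezone scenario is easy to reproduce in a few lines. The sketch below (illustrative only) shows how the same event lands in different daily reporting buckets depending on the zone used for bucketing, which is exactly why a customer comparing a UTC dashboard against their local expectations can conclude that events are "missing":

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def daily_bucket(ts_utc: datetime, tz: str) -> str:
    """Return the reporting day an event falls into, as seen in a given time zone."""
    return ts_utc.astimezone(ZoneInfo(tz)).date().isoformat()

# An event recorded at 01:30 UTC on March 2nd...
event = datetime(2024, 3, 2, 1, 30, tzinfo=timezone.utc)
print(daily_bucket(event, "UTC"))                  # 2024-03-02
print(daily_bucket(event, "America/Los_Angeles"))  # 2024-03-01
```

Nothing is lost here; the event simply belongs to "yesterday" in the customer's local zone, so it appears missing until the next day's bucket closes.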
Visual Overview of the Problem Space
Before diving into code and ticket notes, I wanted the project to visually tell the story of support investigations. The charts below are not just decorative. They are the kinds of visual diagnostics that help explain what the customer sees, what the raw system shows, and where the attribution chain actually breaks.
What I Built
The lab combines a lightweight API, a simulated redirect service, a small event warehouse, and an investigation layer built with SQL. The intent was to reproduce realistic technical support scenarios in a controlled environment so that I could diagnose each issue step by step and document the findings as if I were handing them to customers, product teams, or engineers.
Core System Components
- Mock tracking API for click, install, open, and conversion events
- Redirect logic that preserves or intentionally drops campaign parameters
- SQLite/Postgres event tables for raw and derived event records
- Support runbooks for repeatable troubleshooting workflows
- Ticket-ready summaries for customer-safe communication
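The redirect component's core behavior, preserving or intentionally dropping campaign parameters, can be sketched with only the standard library. The parameter names below are illustrative, not the lab's real contract:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_PARAMS = {"campaign_id", "click_id", "device_id"}  # illustrative names

def build_redirect(incoming_url: str, target_base: str, drop_params: bool = False) -> str:
    """Build the redirect target URL. With drop_params=True, the campaign context
    is discarded, the failure mode the lab intentionally simulates."""
    query = dict(parse_qsl(urlsplit(incoming_url).query))
    kept = {} if drop_params else {k: v for k, v in query.items() if k in TRACKING_PARAMS}
    scheme, netloc, path, _, frag = urlsplit(target_base)
    return urlunsplit((scheme, netloc, path, urlencode(kept), frag))
```

Filtering to a known allowlist also mirrors real redirect services, which typically forward tracking parameters while stripping everything else.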
Support Skills Demonstrated
- SQL-based data reconciliation and event tracing
- API debugging and payload validation
- Chrome Developer Tools network inspection
- Root cause analysis and bug isolation
- Clear communication to both technical and non-technical audiences
Case Study: Missing Conversion Attribution
One of the strongest scenarios in the lab involved a customer complaint that campaign clicks were visible, but conversions were not being attributed properly. At first glance, the dashboard made it look like campaign performance had suddenly deteriorated. The deeper investigation showed something much more specific: the click event was being captured correctly, but the downstream conversion payload no longer contained the linking field required to preserve attribution through the redirect and app-open process.
To diagnose this, I validated raw clicks, checked whether install/open events carried the expected device identifiers, and compared the conversion payload structure with known-good events. The problem ended up being a session handoff issue introduced during the redirect phase.
```sql
SELECT
    c.click_id,
    c.campaign_name,
    c.device_id,
    c.click_timestamp,
    v.conversion_timestamp,
    v.revenue
FROM click_events c
LEFT JOIN conversion_events v
    ON c.device_id = v.device_id
WHERE v.conversion_timestamp IS NULL
ORDER BY c.click_timestamp DESC;
```
This query highlights click records that never generated a downstream attributed conversion. From there, I compared successful versus failed sessions and found that the broken flow was dropping the campaign context before the conversion event was posted.
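The investigation is easy to reproduce end to end with SQLite. The snippet below is a minimal, self-contained reproduction with an illustrative two-row dataset (the column set matches the query above; the table contents are invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE click_events (click_id TEXT, campaign_name TEXT, device_id TEXT, click_timestamp TEXT);
CREATE TABLE conversion_events (device_id TEXT, conversion_timestamp TEXT, revenue REAL);
INSERT INTO click_events VALUES
  ('c1', 'spring_sale', 'dev_a', '2024-03-01T10:00:00'),
  ('c2', 'spring_sale', 'dev_b', '2024-03-01T11:00:00');
-- only dev_a's session preserved the linking field and converted
INSERT INTO conversion_events VALUES ('dev_a', '2024-03-01T10:05:00', 9.99);
""")

rows = con.execute("""
SELECT c.click_id, c.campaign_name, c.device_id
FROM click_events c
LEFT JOIN conversion_events v ON c.device_id = v.device_id
WHERE v.conversion_timestamp IS NULL
ORDER BY c.click_timestamp DESC
""").fetchall()
print(rows)  # [('c2', 'spring_sale', 'dev_b')]
```

Running this in memory makes the symptom concrete: dev_b's click exists, but nothing downstream ever references it, which is exactly what the customer's dashboard reported as "lost" conversions.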
Case Study: Duplicate Events and Inflated Revenue
Another scenario reproduced a spike in conversions that looked positive at first but was actually caused by duplicate conversion submissions. The underlying problem was a client-side retry pattern with no deduplication rule. From a support perspective, this is a good example of why engineers and customers can initially interpret the same data in opposite ways: the dashboard seems to show improvement, while support has to determine whether the increase reflects real customer behavior or technical duplication.
I grouped events by device, campaign, and event type to identify records that should not repeat within the same session window. The resulting investigation clearly showed duplicate conversion clusters tied to retry behavior rather than legitimate user activity.
```sql
SELECT
    device_id,
    campaign_name,
    event_name,
    COUNT(*) AS duplicate_count
FROM conversion_events
GROUP BY device_id, campaign_name, event_name
HAVING COUNT(*) > 1
ORDER BY duplicate_count DESC;
```
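The fix side of this scenario is a windowed deduplication rule. The sketch below is one common approach, keeping the first event per (device, campaign, event type) key and dropping repeats inside a suppression window; the 60-second window and dict-based event shape are assumptions for the demo, not the lab's actual configuration:

```python
DEDUP_WINDOW_S = 60  # illustrative: suppress identical events within 60 seconds

def dedupe(events: list[dict]) -> list[dict]:
    """Keep the first event per (device_id, campaign_name, event_name) key inside
    the window; drop the client-retry duplicates that inflate revenue."""
    last_kept: dict[tuple, float] = {}
    kept = []
    for e in sorted(events, key=lambda e: e["ts"]):
        key = (e["device_id"], e["campaign_name"], e["event_name"])
        if key not in last_kept or e["ts"] - last_kept[key] > DEDUP_WINDOW_S:
            kept.append(e)
            last_kept[key] = e["ts"]
    return kept
```

A retry burst at t=0, 10, and 20 seconds collapses to one event, while a genuinely separate conversion minutes later survives, which is the distinction the reporting layer needs.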
Why the Visual Layer Matters
Technical support is not just about finding the answer. It is also about explaining the answer. The visualizations in this project were designed to help different audiences understand the same issue from their own perspective. A customer may need reassurance that the issue has been isolated. A product manager may want to see where in the funnel the failure begins. An engineer may care most about the consistency of identifiers across event tables. Good support work often means translating one technical reality into several different explanations without losing accuracy.
How I Structured the Support Workflow
Each scenario in the lab follows a repeatable support engineering process:
- Reproduce the issue in a controlled environment
- Inspect redirect behavior and network traffic in the browser
- Validate request payloads and response patterns
- Compare raw tables against reporting logic with SQL
- Determine whether the issue is implementation error, product defect, expected behavior, or data latency
- Write an internal summary and a customer-facing response
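The payload-validation step in that workflow can be sketched as a comparison against a known-good field set. The required field names below are illustrative; the point is producing human-readable findings that drop straight into ticket notes:

```python
REQUIRED_FIELDS = {"device_id", "campaign_id", "event_name", "timestamp"}  # illustrative

def validate_payload(payload: dict) -> list[str]:
    """Return a list of human-readable problems with a conversion payload,
    suitable for pasting into an internal ticket summary."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - payload.keys())]
    problems += [f"empty value: {k}" for k, v in sorted(payload.items())
                 if k in REQUIRED_FIELDS and v in ("", None)]
    return problems
```

An empty result means the payload matches the known-good shape, which lets the investigation move on to the next stage of the funnel instead of re-litigating the request format.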
That workflow is what makes this project relevant to technical support engineering roles. It shows not only that I can analyze data, but that I can diagnose technical failures, document the resolution path, and communicate clearly across functions.
Tech Stack and Investigation Toolkit
The implementation is intentionally practical rather than overengineered. The value of the project is in the support workflow, not in flashy infrastructure. I focused on tools that let me move quickly from reproduction to evidence to documented resolution.
Conclusion
This project was designed to look and feel like the work of a real support engineer, not just an analyst. It combines debugging, SQL investigation, browser inspection, event tracing, and documentation into a single case study that reflects the realities of supporting APIs, deep links, and attribution systems in a SaaS environment.
The final result is a portfolio piece that shows I can move from a vague customer-reported symptom to a reproducible technical explanation, then turn that explanation into a clear resolution path for support, engineering, and customer stakeholders.