
Techniques & Models

Attribution models, segmentation, cohort analysis, and campaign optimization.

By D-LIT Team

Analytical techniques are the methods by which raw marketing data becomes decisions. The difference between a marketing team that iterates on intuition and one that iterates on evidence is largely a function of which techniques they apply, how rigorously they apply them, and how quickly they act on the results.

This article covers eight analytical techniques in depth, with emphasis on the areas most underserved by existing marketing analytics resources: multi-touch attribution (which most teams implement poorly), ABM analytics (which most resources ignore entirely), and the B2B versus ecommerce measurement distinction (which almost no competitor content addresses with specificity). For the underlying data infrastructure these techniques require, see Marketing Data Sources. For the KPIs these techniques inform, see Marketing KPIs. For how results surface in reporting, see Marketing Dashboards.


1. Multi-Touch Attribution

Attribution is the process of assigning credit for a business outcome - typically a closed deal or a pipeline opportunity - to the marketing touchpoints that preceded it. It is the most consequential analytical decision a marketing team makes, because it determines which channels receive budget and which are cut.

Most organizations use a model that is easy to implement but analytically wrong. Understanding the full spectrum of attribution models, their mathematical properties, and their practical limitations is the prerequisite for making sound budget allocation decisions.

The Attribution Model Spectrum

Last-Touch Attribution

Last-touch assigns 100% of credit to the final marketing touchpoint before conversion.

Touchpoints: Organic Search → Webinar → Email Nurture → Demo Request
Last-Touch Credit: Demo Request email = 100%

Last-touch is easy to implement (every analytics platform supports it by default) and produces clean, actionable data. Its fatal flaw is that it systematically undervalues every touchpoint that is not the last one. In a world where brand awareness, SEO content, and webinars create the conditions for demo requests to occur, last-touch assigns zero credit to those investments. Organizations using last-touch consistently underinvest in upper-funnel marketing.

First-Touch Attribution

First-touch assigns 100% of credit to the first touchpoint, typically the channel that introduced the prospect.

Touchpoints: Organic Search → Webinar → Email Nurture → Demo Request
First-Touch Credit: Organic Search = 100%

First-touch is useful for measuring channel performance at the top of the funnel. It answers the question: where are our best customers coming from at the beginning of their journey? Its flaw is the mirror of last-touch: it ignores everything that happened between introduction and conversion, making it impossible to evaluate nurture program effectiveness.

Linear Attribution

Linear attribution distributes credit equally across all touchpoints in the journey.

Touchpoints: Organic Search → Webinar → Email Nurture → Demo Request
Linear Credit: 25% / 25% / 25% / 25%

Linear is more defensible than single-touch models and avoids the systematic biases of first-touch and last-touch. Its weakness is the equal-weight assumption: it treats a one-second banner ad impression identically to a ninety-minute webinar attendance, which does not reflect the likely difference in their influence on buyer intent.

Time-Decay Attribution

Time-decay assigns more credit to touchpoints that occurred closer in time to the conversion event, on the assumption that recency correlates with influence.

Touchpoints (days before conversion): Organic Search (90d) → Webinar (45d) → Email (14d) → Demo (1d)
Time-Decay Credit: ~5% / ~15% / ~30% / ~50%

Time-decay is intuitive and often produces results that align with sales team intuition. Its limitation is the assumption that recency equals influence, which may not hold. A prospect who attended a webinar ninety days ago and then searched for competitors before returning to request a demo may have been more influenced by the webinar than the proximity weights suggest.
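The decay schedule above can be reproduced with a simple exponential weighting function. The sketch below assumes a 30-day half-life - a tuning parameter chosen here only because it roughly reproduces the example weights, not a standard value:

```python
def time_decay_credit(days_before_conversion, half_life_days=30):
    """Exponential time-decay: a touch loses half its weight every half_life_days."""
    raw = [2 ** (-d / half_life_days) for d in days_before_conversion]
    total = sum(raw)
    return [w / total for w in raw]  # normalize so credit sums to 100%

# Touchpoints at 90, 45, 14, and 1 days before conversion
credits = time_decay_credit([90, 45, 14, 1])
print([round(c, 2) for c in credits])  # [0.06, 0.16, 0.33, 0.45]
```

Shortening the half-life shifts credit further toward the most recent touches.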

Position-Based (U-Shaped) Attribution

Position-based attribution gives elevated credit to the first and last touchpoints, distributing the remainder across middle touchpoints.

Standard allocation: First touch = 40%, Last touch = 40%, Remaining touches share 20%

This model reflects a common intuition: the touchpoint that created awareness and the touchpoint that converted intent are disproportionately important. Middle touchpoints get some credit for nurture contribution. This is a reasonable compromise when data-driven models are not feasible.

W-Shaped Attribution

W-shaped attribution adds a third high-weight position: the touchpoint at which the lead became a marketing qualified lead or entered the CRM as an identified contact.

Standard allocation: First touch = 30%, MQL creation touch = 30%, Last touch = 30%, Remaining = 10%

W-shaped is particularly appropriate for B2B organizations where the MQL conversion is a meaningful and deliberate milestone, not merely an intermediate step.

Data-Driven (Algorithmic) Attribution

Data-driven attribution uses machine learning to assign fractional credit based on empirical analysis of which touchpoint combinations and sequences are statistically associated with conversion. Google Analytics 4 and Google Ads offer data-driven attribution natively.

Model input: Historical conversion path data (touchpoint sequences and outcomes)
Model output: Learned credit weights per touchpoint type and position

Data-driven attribution is the most analytically rigorous approach when data volume is sufficient. GA4 uses data-driven attribution as the default model, but it requires sufficient conversion volume - approximately four hundred conversions over twenty-eight days - to function reliably. Below that threshold, GA4 silently falls back to last-click attribution without notifying the user.

Attribution Windows

Every attribution model requires a defined window: how far back in time should touchpoints be considered? Common windows:

  • 7-day click (default for many paid platforms)
  • 30-day click / 1-day view
  • 90-day (appropriate for mid-length B2B sales cycles)
  • Full-journey (from first known touch to conversion, no time limit)

The appropriate window is determined by your sales cycle length. If your average B2B sales cycle is four months, a thirty-day attribution window misses most of the journey. Full-journey attribution is operationally complex but most accurate for long-cycle businesses.

Implementing Multi-Touch Attribution in Practice

The practical implementation challenge is joining touchpoint data (from your MAP and web analytics) with outcome data (from your CRM) at the person level. The process:

  1. Create a unified touchpoint table in your warehouse: one row per person per touchpoint, with touchpoint type, campaign, date, and channel.
  2. Join to CRM opportunity data: for each opportunity, identify all touchpoints on the associated contact and account records within the attribution window.
  3. Apply the credit allocation formula across touchpoints for each opportunity.
  4. Aggregate credit by channel, campaign, and time period to produce channel attribution reports.

This analysis should run on a weekly basis in your warehouse, with results surfaced in your marketing performance and CMO dashboards.
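Step 3 - applying the credit allocation formula per opportunity - can be sketched in a few lines. The example below uses position-based (U-shaped) weights on hypothetical channel journeys; the data and function are illustrative, not a reference implementation:

```python
from collections import defaultdict

def u_shaped_credit(channels):
    """Position-based credit: 40% first, 40% last, 20% split across the middle."""
    n = len(channels)
    if n == 1:
        return {channels[0]: 1.0}
    if n == 2:
        return {channels[0]: 0.5, channels[1]: 0.5}
    credit = defaultdict(float)
    for i, channel in enumerate(channels):
        credit[channel] += 0.4 if i in (0, n - 1) else 0.2 / (n - 2)
    return dict(credit)

# One ordered journey per opportunity (hypothetical data)
journeys = [
    ["organic", "webinar", "email", "paid_search"],
    ["paid_search", "email"],
]

totals = defaultdict(float)  # step 4: aggregate credit by channel
for journey in journeys:
    for channel, weight in u_shaped_credit(journey).items():
        totals[channel] += weight
print(dict(totals))
```

Total credit always sums to the number of opportunities, which makes channel-level aggregates directly comparable to opportunity counts.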

Attribution Model Comparison

Running multiple attribution models simultaneously and comparing their outputs is more informative than committing to any single model. The delta between last-touch and multi-touch attribution reveals which channels are systematically under-credited by your current model - typically brand, content, and webinars for B2B organizations.


2. Funnel Conversion Analysis

Funnel analysis measures the conversion rate at each stage of the marketing and sales process, identifying where the largest losses occur and where investment in improvement produces the highest return.

The B2B Funnel Model

A standard B2B marketing funnel has five to seven stages:

  • Awareness (Visitors)
  • Interest (Leads - form submissions)
  • Qualification (MQLs - scored and qualified leads)
  • Sales Acceptance (SQLs - accepted by sales)
  • Opportunity (Active pipeline)
  • Closed-Won (Customers)

Stage conversion rates are calculated for each transition:

Visitor-to-Lead Rate = Leads / Visitors × 100
Lead-to-MQL Rate = MQLs / Leads × 100
MQL-to-SQL Rate = SQLs / MQLs × 100
SQL-to-Opportunity Rate = Opportunities / SQLs × 100
Opportunity-to-Closed Rate = Closed Won / Opportunities × 100
End-to-End Rate = Customers / Visitors × 100

The product of all stage rates gives the end-to-end conversion rate:

Overall Rate = Visitor-to-Lead × Lead-to-MQL × MQL-to-SQL × SQL-to-Opp × Opp-to-Close
Example: 3% × 25% × 30% × 65% × 25% = 0.037%

This means approximately 1 in 2,700 website visitors becomes a customer. Understanding this number makes top-of-funnel volume investment requirements immediately clear.
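The arithmetic above is easy to verify:

```python
# Stage conversion rates from the worked example
rates = {
    "visitor_to_lead": 0.03,
    "lead_to_mql": 0.25,
    "mql_to_sql": 0.30,
    "sql_to_opp": 0.65,
    "opp_to_close": 0.25,
}

overall = 1.0
for rate in rates.values():
    overall *= rate  # end-to-end rate is the product of stage rates

print(f"End-to-end rate: {overall:.4%}")               # 0.0366%
print(f"Visitors per customer: {round(1 / overall)}")  # 2735
```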

Funnel Analysis by Segment

Aggregate funnel rates hide segment-level variation that is actionable. Analyze conversion rates by:

  • Traffic source and campaign
  • ICP segment (company size, industry, geography)
  • Lead source (inbound vs. outbound vs. partner)
  • Content type consumed before conversion

A segment with high visitor-to-lead conversion but low MQL-to-SQL suggests the segment produces leads that do not match ICP criteria - a targeting problem. A segment with low visitor-to-lead but high downstream rates suggests high-intent traffic that needs better conversion path optimization.

Funnel Velocity

Beyond conversion rates, funnel velocity measures time-in-stage for each transition:

Average Days Visitor-to-Lead → Average Days Lead-to-MQL → ...

Velocity analysis identifies where prospects stall. A long average time at MQL-to-SQL often indicates a handoff process problem rather than a quality problem - qualified leads sitting in a queue, waiting for SDR outreach.


3. A/B Testing and Experimental Design

A/B testing is the practice of running controlled experiments to determine whether a change to a marketing asset or program produces a statistically significant improvement in a target metric. Done rigorously, it replaces opinion with evidence. Done carelessly, it produces misleading results that lead to bad decisions.

Statistical Foundations

Hypothesis formation: Every test must begin with a specific hypothesis that predicts a directional change:

H0 (null): The new subject line produces no difference in open rate
H1 (alternative): The new subject line produces a higher open rate

Sample size calculation: Running a test without sufficient sample size is one of the most common A/B testing errors. Use a sample size calculator with:

Required Inputs:
- Baseline conversion rate (current performance)
- Minimum detectable effect (smallest improvement that would be actionable)
- Statistical significance level (typically 95%, α = 0.05)
- Statistical power (typically 80%, β = 0.20)

For example: testing email subject lines with a baseline open rate of 22% and a minimum detectable effect of 2 percentage points requires approximately 3,800 recipients per variant.
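A minimal sample size calculation can be done with the standard library. Note that calculators differ in the approximation they use (arcsine versus normal), so results for the same inputs can vary by nearly a factor of two; the sketch below uses the two-sided normal approximation, which sits at the conservative end:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_base, mde, alpha=0.05, power=0.80):
    """Per-variant n for a two-sided two-proportion z-test (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # e.g. 0.84 for 80% power
    p_variant = p_base + mde
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    return math.ceil((z_alpha + z_power) ** 2 * variance / mde ** 2)

# 22% baseline open rate, 2-percentage-point minimum detectable effect
print(sample_size_per_variant(0.22, 0.02))
```

Halving the minimum detectable effect roughly quadruples the required sample, which is why small expected lifts demand very large lists.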

Significance threshold: A result is statistically significant when the p-value falls below your predetermined α threshold (typically 0.05). This means that, if there were truly no difference between variants, a difference at least this large would be observed less than 5% of the time. Statistical significance does not mean practical significance - a statistically significant 0.1% improvement in conversion rate may not justify the engineering cost of implementation.

Early stopping: Peeking at results during a test and stopping when significance appears inflates false positive rates substantially. Run tests to completion. Define the sample size and duration before starting, and do not stop early regardless of interim results.
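The inflation from peeking is easy to demonstrate with an A/A simulation, where both variants are identical and every "significant" result is by definition a false positive. The conversion rate, look schedule, and simulation counts below are arbitrary illustration values:

```python
import random

random.seed(7)

def aa_false_positive_rates(n_sims=400, n_max=2000, check_every=200, z=1.96):
    """A/A simulation: both arms convert at 10%, so any 'significant' result
    is a false positive. Compare stopping at the first significant interim
    look (peeking) with a single test at the fixed horizon."""
    peek_fp = fixed_fp = 0
    for _ in range(n_sims):
        a = b = 0
        hit = False
        for i in range(1, n_max + 1):
            a += random.random() < 0.10
            b += random.random() < 0.10
            if i % check_every == 0:           # an interim "peek"
                p = (a + b) / (2 * i)          # pooled conversion rate
                se = (2 * p * (1 - p) / i) ** 0.5
                sig = abs(a - b) / i > z * se  # two-proportion z-test
                hit = hit or sig
                if i == n_max and sig:
                    fixed_fp += 1
        peek_fp += hit
    return peek_fp / n_sims, fixed_fp / n_sims

peek, fixed = aa_false_positive_rates()
print(f"peeking FPR: {peek:.1%}, fixed-horizon FPR: {fixed:.1%}")
```

With ten interim looks, the false positive rate typically climbs to three or four times the nominal 5% level.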

What to Test

High-impact A/B test subjects in marketing:

  • Email: Subject line, sender name, send time, CTA copy, personalization tokens
  • Landing page: Headline, value proposition copy, form length, CTA button text and color, social proof placement
  • Paid ad: Headline, description, image or video creative, audience segment
  • Lead nurture flow: Message sequence, timing between touches, content offers

Multivariate Testing

When multiple elements interact, multivariate testing (testing combinations of variables simultaneously) is more efficient than sequential A/B tests but requires proportionally larger sample sizes. Reserve multivariate testing for high-traffic pages where sample size is not a constraint.


4. Channel Mix Optimization

Channel mix optimization is the process of allocating marketing budget across channels to maximize total pipeline or revenue output, accounting for saturation effects at each channel level.

Marginal Return Analysis

Every marketing channel has a marginal return curve: each additional dollar invested produces diminishing incremental return as the channel becomes saturated. The optimal budget allocation sets marginal return equal across all channels - the point where shifting a dollar from any channel to any other channel does not improve total output.

Optimal condition: MR(Channel A) = MR(Channel B) = MR(Channel C) = ...
Where MR = Marginal Revenue per dollar of additional spend

In practice, measuring marginal returns requires running spend level experiments - increasing investment in one channel while holding others constant - and measuring the pipeline or revenue response. This is time-consuming but more accurate than top-line ROMI comparisons, which do not account for saturation.

Marketing Mix Modeling (MMM)

For organizations with sufficient historical spend and revenue data (typically two or more years of weekly data across multiple channels), Marketing Mix Modeling uses econometric regression to decompose revenue into contributions from each marketing channel, organic factors, and seasonal effects.

Revenue = β₀ + β₁(Paid_Search_Spend) + β₂(Paid_Social_Spend) + β₃(Email_Contacts)
        + β₄(Organic_Search_Sessions) + β₅(Seasonality) + ε

MMM captures effects that attribution models miss: cross-channel halo effects (brand spend lifting paid search efficiency), lag effects (brand investment in Q1 influencing pipeline in Q3), and diminishing returns curves per channel.

MMM is complementary to, not a replacement for, multi-touch attribution. Attribution provides granular campaign-level insight; MMM provides macro-level budget allocation guidance.
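The regression above can be sketched on synthetic data with an ordinary least squares fit. Production MMM work adds adstock and saturation transforms before fitting; this minimal sketch, with invented coefficients and spend series, shows only the decomposition mechanic:

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = 104  # two years of weekly data

# Synthetic channel inputs (hypothetical, for illustration only)
paid_search = rng.uniform(10, 50, weeks)
paid_social = rng.uniform(5, 30, weeks)
seasonality = np.sin(np.arange(weeks) * 2 * np.pi / 52)

# "True" generating process: revenue responds to spend plus seasonality
revenue = (100 + 3.0 * paid_search + 1.5 * paid_social
           + 20 * seasonality + rng.normal(0, 5, weeks))

# OLS decomposition: solve X @ beta ≈ revenue
X = np.column_stack([np.ones(weeks), paid_search, paid_social, seasonality])
beta, *_ = np.linalg.lstsq(X, revenue, rcond=None)
print(dict(zip(["intercept", "paid_search", "paid_social", "seasonality"],
               beta.round(2))))
```

With clean synthetic data the fitted coefficients recover the generating values closely; real marketing data is far noisier, which is why MMM needs the long history noted above.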

Share of Voice and Competitive Spend Analysis

Channel mix decisions should incorporate competitive intelligence. If competitors are saturating paid search for your primary keywords, the marginal return on incremental paid search investment decreases. Redirecting budget toward channels where share of voice is lower (LinkedIn, content, webinars) may produce better returns.


5. Account-Based Marketing (ABM) Analytics

ABM analytics is a measurement discipline that most marketing analytics resources do not cover. It is also one of the fastest-growing practices in B2B marketing, making the gap consequential. ABM treats the account - not the individual lead - as the unit of analysis.

Why ABM Requires Different Measurement

Standard marketing analytics counts leads and MQLs. ABM asks different questions:

  • Is this target account engaging with our marketing?
  • How many people within the account have we reached?
  • Which accounts in our target list are showing intent signals?
  • What is the pipeline coverage generated from our target account list?

None of these questions can be answered by lead-level analytics alone. They require account-level data aggregation, which is why ABM analytics requires both MAP data (individual activity) and CRM data (account records) joined at the account level.

Core ABM Metrics

Account Coverage:

Account Coverage = Accounts with at Least 1 Known Contact / Total Target Accounts × 100

Measures the proportion of target accounts that have at least one identified contact in the database. Low coverage means marketing programs cannot reach most target accounts.

Account Engagement Score: An aggregate of individual contact interactions rolled up to the account level:

Account Engagement = Σ(Weighted Contact Activity Scores) / Days in Period
Activity weights: Website visit = 1, Content download = 3, Webinar attendance = 5, Demo request = 10
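Using the activity weights above, the account-level rollup is a small aggregation. Account names and the activity list are hypothetical:

```python
from collections import defaultdict

# Activity weights from the formula above
WEIGHTS = {"website_visit": 1, "content_download": 3,
           "webinar_attendance": 5, "demo_request": 10}

def account_engagement(activities, days_in_period=30):
    """Roll individual contact activities up to the account level.
    activities: list of (account_id, activity_type) tuples."""
    scores = defaultdict(float)
    for account_id, activity in activities:
        scores[account_id] += WEIGHTS.get(activity, 0)
    return {acct: total / days_in_period for acct, total in scores.items()}

activities = [
    ("acme", "website_visit"), ("acme", "webinar_attendance"),
    ("acme", "demo_request"), ("globex", "content_download"),
]
print(account_engagement(activities))
```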

Pipeline Coverage from Target Account List:

Pipeline Coverage = Opportunities from Target Accounts / Target ARR Goal × 100

The ratio of open pipeline sourced from ABM target accounts to the revenue goal allocated to the ABM program.

Opportunity Influence Rate:

Influence Rate = Target Account Opportunities with Marketing Touches / Total Target Account Opportunities × 100

Measures what percentage of opportunities in target accounts had at least one marketing touchpoint - the foundation for multi-touch attribution in an ABM context.

Progression Rate:

Progression Rate = Accounts That Moved Forward in Funnel / Total Accounts in Funnel Stage × 100

Measured across all funnel stages (aware, engaged, MQL, opportunity, customer), progression rate shows whether ABM programs are moving accounts through the funnel or generating stagnant engagement.

ABM Data Infrastructure

ABM analytics requires account-level data joining that most standard marketing data stacks do not perform automatically:

  1. CRM accounts must be maintained with consistent naming and DUNS or firmographic identifiers to enable de-duplication of accounts that appear under multiple names.
  2. Contact activities in the MAP must be associated to their parent account in Salesforce via the standard contact-to-account relationship.
  3. Intent data platforms (Bombora, G2 Buyer Intent, LinkedIn Audience Insights) provide account-level intent signals that supplement behavioral data from owned channels.
  4. LinkedIn Account Engagement data (available through Campaign Manager) provides impressions and engagement data at the account level for paid programs.

ABM Attribution

Attribution in ABM is inherently multi-threaded. A single opportunity may involve four to eight stakeholders, each touched by different marketing programs. Standard person-level attribution undercounts marketing’s influence in ABM accounts because it attributes only to the contacts formally associated with the opportunity.

The more accurate approach aggregates all marketing touches from any contact at the account in the attribution window, not only those formally linked to the opportunity in CRM. This typically requires custom SQL logic in the warehouse joining account-level touchpoints to opportunity data.
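The difference between the two approaches can be shown with a toy in-memory version of that join. Contact c2's touch is invisible to person-level attribution because c2 is not formally linked to the opportunity; account-level aggregation captures it. All identifiers and dates are invented:

```python
# Flat warehouse-style records: (account_id, contact_id, channel, day)
touches = [
    ("acme", "c1", "webinar", 10),
    ("acme", "c2", "paid_search", 40),  # c2 is NOT on the opportunity
    ("acme", "c1", "email", 55),
]
# Opportunity: (opp_id, account_id, linked_contacts, created_day)
opp = ("opp1", "acme", ["c1"], 60)

WINDOW = 90  # attribution window in days

def account_level_touches(opp):
    """All touches from any contact at the account, inside the window."""
    _, account, _, created = opp
    return [t for t in touches
            if t[0] == account and created - WINDOW <= t[3] <= created]

def contact_level_touches(opp):
    """Only touches from contacts formally linked to the opportunity."""
    _, _, contacts, _ = opp
    return [t for t in account_level_touches(opp) if t[1] in contacts]

print(len(contact_level_touches(opp)))  # person-level view undercounts
print(len(account_level_touches(opp)))  # account-level view sees every touch
```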


6. B2B Versus Ecommerce Marketing Analytics

The measurement frameworks appropriate for B2B and ecommerce marketing differ substantially. Most marketing analytics content is written for one context without acknowledging the other. The table below maps the key differences.

| Dimension | B2B | Ecommerce |
| --- | --- | --- |
| Purchase decision timeline | Weeks to months | Hours to days |
| Decision-making unit | Multiple stakeholders | Individual or household |
| Primary conversion event | MQL, demo request, opportunity creation | Transaction (purchase) |
| Attribution window | 30-180 days | 7-30 days |
| Revenue attribution | Requires CRM closed-loop | Transaction data available immediately |
| Primary attribution model | Multi-touch (W or full-path) | Data-driven or last-click |
| CLV calculation horizon | 2-5+ years | 12-24 months |
| Lead scoring relevance | High (central to pipeline process) | Low (purchase or no purchase) |
| ABM relevance | High | Low (account-level targeting via ad platforms only) |
| ROAS reliability | Low (long cycle obscures attribution) | High (short cycle, direct attribution) |
| Key funnel stages | Awareness → Lead → MQL → SQL → Opportunity → Customer | Awareness → Session → PDP View → Cart → Purchase → Repeat Purchase |

Ecommerce-Specific Techniques

Customer Cohort Analysis: Group customers by acquisition month and track their purchase frequency, average order value, and cumulative revenue over time. Cohort analysis reveals whether acquisition quality is improving or declining, and at what point in the customer lifecycle churn accelerates.

Cohort Revenue Retention = Month N Revenue from Cohort / Month 0 Revenue from Cohort × 100
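A minimal version of this cohort rollup, with invented order data:

```python
from collections import defaultdict

# Orders: (acquisition_cohort, months_since_acquisition, revenue) - hypothetical
orders = [
    ("2024-01", 0, 120.0), ("2024-01", 0, 80.0),
    ("2024-01", 1, 60.0),  ("2024-01", 2, 40.0),
    ("2024-02", 0, 150.0), ("2024-02", 1, 30.0),
]

revenue = defaultdict(float)
for cohort, month_n, amount in orders:
    revenue[(cohort, month_n)] += amount  # sum revenue per cohort-month cell

def retention(cohort, month_n):
    """Month N revenue as a percentage of the cohort's month 0 revenue."""
    base = revenue[(cohort, 0)]
    return revenue[(cohort, month_n)] / base * 100 if base else 0.0

print(retention("2024-01", 1))  # January cohort retains 30% of its month-0 revenue
```

Laying the `retention` values out as a cohort-by-month grid produces the familiar retention triangle, where declining columns show where churn accelerates.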

Cart Abandonment Analysis: Track the conversion rate from “add to cart” to purchase, segmented by traffic source, device, and product category. Cart abandonment rate is typically 70-75% and varies by traffic source - direct traffic typically converts at higher rates than paid traffic.

Repeat Purchase Rate:

Repeat Purchase Rate = Customers with 2+ Purchases / Total Customers × 100

For most ecommerce categories, 30-40% repeat purchase rate within twelve months indicates healthy retention.

B2B-Specific Techniques

Sales Cycle Length Analysis: Track the average and distribution of days from opportunity creation to close-won, segmented by deal size, segment, and source. Long tail distributions (many deals closing in 14 days, many taking 180+) indicate bimodal buyer populations that may require different nurture sequences.

Pipeline Velocity:

Pipeline Velocity = (Number of Opportunities × Win Rate × Average Deal Value) / Sales Cycle Length

Pipeline velocity expresses how quickly deals move through the pipeline and how much revenue the pipeline generates per unit of time. It integrates volume, quality, and efficiency into a single metric for executive reporting.
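The formula is a one-liner; the inputs below are illustrative:

```python
def pipeline_velocity(opportunities, win_rate, avg_deal_value, cycle_days):
    """Revenue the pipeline produces per day."""
    return opportunities * win_rate * avg_deal_value / cycle_days

# 80 open opportunities, 25% win rate, $40k average deal, 90-day cycle
print(pipeline_velocity(80, 0.25, 40_000, 90))  # ≈ $8,889 of revenue per day
```

Because the cycle length sits in the denominator, shortening the sales cycle raises velocity just as effectively as adding pipeline volume.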

ICP Fit Scoring: Segmenting pipeline and customers by ICP fit criteria (company size, industry, use case, tech stack) and correlating fit scores with win rate, ACV, and retention creates the empirical foundation for ICP refinement and account list selection.


7. Content Performance Analytics

Content marketing produces assets - blog posts, guides, webinars, videos - that generate traffic and leads over extended periods. Measuring content performance requires both traffic attribution (how much traffic does this content drive) and pipeline attribution (how much pipeline did this content influence).

Content Traffic Attribution

Organic search attribution tracks the keyword rankings and organic traffic generated by each piece of content. Tools: Google Search Console (authoritative for owned domains), Ahrefs and SEMrush (for competitor comparison and opportunity identification).

Key content traffic metrics:

  • Organic sessions per piece (trailing 30/90/365 days)
  • Keyword rankings and position changes
  • Featured snippet ownership
  • Internal link equity flow (content linking to high-converting pages)

Content Pipeline Attribution

The more commercially relevant metric is pipeline attribution: which content pieces appear in the paths of leads who became customers?

Content Pipeline Influence = Opportunities where contact visited content page in attribution window
Content Pipeline Influence Rate = Influenced Opportunities / Total Opportunities × 100

This analysis requires joining GA4 content page session data with CRM opportunity data via the contact’s session history - the same data join that powers multi-touch attribution.

Content return on investment:

Content ROI = ((Pipeline Influenced × Win Rate × ACV) - Content Production Cost) / Content Production Cost × 100

Long-form content that ranks for competitive keywords frequently produces the highest ROI when measured over twelve to twenty-four month horizons, because production costs are fixed and organic traffic compounds.


8. Predictive Lead Scoring

Traditional lead scoring assigns points based on demographic fit and behavioral signals using rules defined by marketing operations. Predictive lead scoring uses machine learning to learn which characteristics and behaviors correlate with conversion, assigning scores based on empirical patterns rather than intuitive rules.

Predictive Scoring Model Architecture

Training data: Historical leads with binary outcome labels (converted to customer = 1, did not convert = 0), along with all available features at the time of lead creation:

Features (examples):
- Firmographic: company size, industry, geography, technology stack
- Demographic: job title, seniority, department
- Behavioral: pages visited, content downloaded, email engagement score, time-on-site
- Source: acquisition channel, campaign, referrer

Model types: Logistic regression, gradient boosting (XGBoost, LightGBM), and random forest models are commonly used. The model outputs a probability score (0-1) that each new lead will convert, which maps to a scoring tier (A, B, C, D or 1-100).

Validation: Predictive models must be evaluated on held-out test data, not training data. Use AUC-ROC (Area Under the Receiver Operating Characteristic curve) as the primary evaluation metric. An AUC of 0.5 is random; 0.7-0.8 is acceptable; above 0.8 is strong for most marketing lead scoring applications.
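AUC-ROC has a useful probabilistic reading: the chance that a randomly chosen converter is scored above a randomly chosen non-converter. A from-scratch version of that computation on a hand-made held-out set (labels and scores are invented):

```python
def auc_roc(labels, scores):
    """AUC via the rank (Mann-Whitney) formulation: the probability that a
    random positive outranks a random negative, with ties counted as half."""
    pairs = wins = 0.0
    for label_i, score_i in zip(labels, scores):
        for label_j, score_j in zip(labels, scores):
            if label_i == 1 and label_j == 0:
                pairs += 1
                if score_i > score_j:
                    wins += 1
                elif score_i == score_j:
                    wins += 0.5
    return wins / pairs

# Held-out leads: 1 = converted. A useful model ranks most positives higher.
labels = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.6, 0.5, 0.4, 0.3, 0.2, 0.35, 0.7, 0.1]
print(round(auc_roc(labels, scores), 3))  # 0.833
```

The quadratic loop is fine for illustration; on real lead volumes you would use a library implementation such as scikit-learn's `roc_auc_score`.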

Model decay: Lead scoring models degrade over time as market conditions, ICP definitions, and product positioning change. Retrain models quarterly using recent data and monitor score distribution drift as an early warning signal.

Operationalizing Predictive Scoring

Predictive scores are most valuable when they trigger sales development actions. Integration pattern:

  1. Model runs nightly in warehouse, scoring all new leads and refreshing existing scores.
  2. Scores sync to CRM (Salesforce) as a lead/contact field via API.
  3. SDR workflow rules prioritize high-score leads for immediate outreach.
  4. Marketing automation routes high-score leads to accelerated nurture sequences.

The business impact of predictive scoring is measured as: improvement in SQL-to-Opportunity conversion rate for scored versus unscored leads, and reduction in sales time spent on low-probability leads. Organizations that have operationalized predictive scoring typically report 15-35% improvement in MQL-to-SQL conversion rates.
