Are your top-selling products making or breaking your business?
It's terrifying to think your whole revenue might collapse if one or two products fall out of favor. But spreading yourself too thin across hundreds of products often leads to mediocre results and brutal price wars.
Discover how a 6-year Shopify case study uncovered the right balance between concentration and diversification.
Why bother?
Understanding concentration in your product portfolio is more than just an intellectual exercise; it has a direct impact on key business decisions. From inventory planning to marketing spend, knowing how your revenue is distributed across products affects your approach.
This post walks through practical methods for measuring concentration, explaining what these measurements actually mean and how to get useful insights from your data.
I'll take you through fundamental metrics and advanced analysis, including interactive visualisations that bring the data to life.
I'm also sharing chunks of the R code used in this analysis. Use it directly or adapt the logic to your preferred programming language.
Looking at market analysis or investment theory, we often deal with concentration — how value is distributed across different components. In e-commerce, this translates into a fundamental question: how much of your revenue should come from your top products?
Is it better to have a few strong sellers or a broad product range? This isn't just a theoretical question …
Having most of your revenue tied to a few products means your operations are streamlined and focused. But what happens when market preferences shift? Conversely, spreading revenue across hundreds of products may seem safer, but it often means you lack any real competitive advantage.
So where's the optimal point? Or rather, what is the optimal range, and how do various ratios describe it?
What makes this analysis particularly valuable is that it's based on real data from a business that kept expanding its product range over time.
On Datasets
This analysis was performed for a real US-based e-commerce store — one of our clients, who kindly agreed to share their data for this article. The data spans six years of their growth, giving us a rich view of how product concentration evolves as a business matures.
While working with actual business data gives us genuine insights, I've also created a synthetic dataset in one of the later sections. This small, artificial dataset helps illustrate the relationships between various ratios in a more controlled setting — showing patterns you can count on your fingers.
To be clear: this synthetic data was created entirely from scratch and only loosely mimics general patterns seen in real e-commerce — it has no direct connection to our client's actual data. This is different from my earlier article, where I generated synthetic data based on real patterns using Snowflake functionality.
Data Export
The main analysis draws from real data, but that small synthetic dataset serves an important purpose — it helps explain the relationships between various ratios in a way that's easy to grasp. And trust me, having such a micro dataset with clear visuals comes in really handy when explaining complex dependencies to stakeholders 😉
The raw transaction export from Shopify contains everything we need, but we must arrange it properly for concentration analysis. The export contains all the items for each transaction, but the date appears in only one row per transaction, so we must propagate it to all items while keeping the transaction id. Probably not in the first iteration of the study, but if we want to fine-tune it, we should also consider how to handle discounts, returns, and so on. In the case of international sales, conduct both a global and a country-specific study.
We have a product name and an SKU, both of which should adhere to some naming convention and logic when dealing with variants. If we have a master catalogue with all of these descriptions and codes, we are very fortunate. If you have it, use it, but compare it to the 'ground truth' of the actual transaction data.
Product Variants
In my case, the product names were structured as a base name and a variant separated by a dash. Very simple to use, split into the main product and variants. Exceptions? Of course, they are always present, especially when dealing with 6 years of highly successful e-commerce data :). For instance, some names (e.g. "All-Purpose") contained a dash, while others didn't. Then, some had variants, while some didn't. So expect some tweaks here, but this is a crucial stage.
If you're wondering why we need to exclude variants from concentration analysis, the figure above illustrates it clearly. The values are considerably different, and we would expect radically different results if we analysed concentration with variants.
The analysis is based on transactions, counting the number of products with/without variants in a given month. But if we have a lot of variants, not all of them will be present in one month's transactions. Yes, that's correct — so let us consider a larger time range, one year.
I calculated the number of variants per base product in a calendar year, based on what we have in the transactions. The number of variants per base product is divided into several bins. Let's take the year 2024. The plot shows that we have somewhere around 170 base items, with less than half having just one variant (the light green bar). However, the other half had more than one variant, and what's noteworthy (and, I believe, non-obvious, unless you work in apparel e-commerce) is that we have products with a really large number of variants. The black bin contains items that come in 100 or more different variants.
If you guessed that they were growing their offering by introducing new products while keeping the old ones available, you are correct. But wouldn't it be interesting to know whether the variants stem from heritage or from new products? What if we included only products introduced in the current year? We can check it by using the date of product introduction rather than the transactions. Because our only dataset is a transaction dump, the first transaction for each product is taken as the introduction date. And for each product, we take all variants that appeared in transactions, with no time constraints (from product introduction to the most recent record).
Now let's put these two plots side by side for easy comparison. Taking transaction dates we have more products in each year, and the difference grows — since there are also transactions with products introduced previously. No surprises here, as expected. If you were wondering why the data for 2019 differ — good catch. In fact, the shop started operating in 2018, but I removed those few initial months; still, it's their impact that makes the difference in 2019.
Product variants and their impact on revenue are not the focus of this article. But as often happens in real analysis, 'branching' options appear as we progress, even in the initial phase. We haven't even finished data preparation, and it's already getting interesting.
Understanding the product structure is essential for conducting meaningful concentration analyses. Now that our data is properly formatted, we can examine actual concentration measurements and what they reveal about ideal portfolio structure. In the following part, we'll look at these measurements and what they mean for e-commerce businesses.
When it comes to measuring concentration, economists and market analysts have done the heavy lifting for us. Over decades of research into markets, competition, and inequality, they've produced powerful analytical methods that have proven useful in a variety of sectors. Rather than developing novel metrics for e-commerce portfolio analysis, we can use existing, time-tested methods.
Let's see how these theoretical frameworks can shed light on practical e-commerce questions.
Herfindahl-Hirschman Index
HHI (Herfindahl-Hirschman Index) is probably the most common way to measure concentration. Regulators use it to check whether a market is becoming too concentrated — they take each company's market share in percent, square it, and add them up. Simple as that. The result can be anywhere from nearly 0 (many small players) to 10,000 (one company takes it all).
Why use HHI for e-commerce portfolio analysis? The logic is simple — instead of companies competing in a market, we have products competing for revenue. The math works exactly the same way — we take each product's share of total revenue, square it, and sum up. A high HHI means revenue depends on a few products, while a low HHI shows revenue is spread across many products. This gives us a single number to track portfolio concentration over time.
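As a quick illustration, here is a minimal sketch of that calculation on a made-up revenue vector (the numbers are invented; the ineq package, used later in this article, should give the same index on a 0-1 scale):
#-- hypothetical monthly revenue for five products
revenue <- c(5000, 3000, 1000, 700, 300)
#-- HHI from the definition: percentage shares, squared and summed
shares_pct <- 100 * revenue / sum(revenue)
sum(shares_pct^2)                   # ~3558 on the 0-10,000 scale
ineq::Herfindahl(revenue) * 10000   # same value via the ineq package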
Pareto
Who hasn't heard of the Pareto rule? In 1896, the Italian economist Vilfredo Pareto observed that 20% of the population held 80% of Italy's land. Since then, this pattern has been found in a variety of fields, including wealth distribution and retail sales.
While popularly known as the "80/20 rule", the Pareto principle is not restricted to those figures. We can use any x-axis criterion (for example, the top 30% of products) to determine the corresponding y value (revenue contribution). The Lorenz curve, formed by linking these points, gives a complete picture of concentration.
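To make the idea concrete, here is a tiny sketch (invented numbers) answering the Pareto-style question directly: what share of revenue comes from the top 20% or 30% of products?
#-- hypothetical revenue per product
revenue <- c(5000, 3000, 1000, 700, 300, 250, 200, 150, 100, 50)
#-- share of revenue captured by the top x% of products
top_share <- function(rev, x) {
  rev <- sort(rev, decreasing = TRUE)
  sum(rev[seq_len(ceiling(length(rev) * x))]) / sum(rev)
}
top_share(revenue, 0.2)   # top 20% of products -> ~0.74 of revenue
top_share(revenue, 0.3)   # top 30% of products -> ~0.84 of revenue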
The chart above shows how many products we need to reach a certain revenue share (of the monthly revenue). I took arbitrary cuts at .2, .3, .5, .8, .95, and of course also 1 — which means the total number of products, contributing to 100% of revenue in a given month.
Lorenz curve
If we sort products by their revenue contribution and chart the line, we get the Lorenz curve. On both axes we have percentages: of products and of their revenue share. In the case of a perfectly uniform revenue distribution we would get a straight line, while in the case of "perfect concentration" a very steep curve, climbing close to 100% of revenue and then turning sharply to the right to pick up some residual revenue from the remaining products.
It's interesting to see that line, but usually it will look quite similar, like a "bent stick". So let us now compare these lines for a few previous months, and also a few years back (sticking to October). The monthly lines are quite similar, and if you think it would be nice to have some interactivity on this plot, you are absolutely right. The yearly comparison shows more differences (we still use monthly data, taking October of each year), and this is understandable, since those measurements are more distant in time.
So we do see differences between the lines, but can't we quantify them somehow, rather than relying only on visual similarity? Definitely, and there is a ratio for that — the Gini ratio. And by the way, we will cover several more ratios in the subsequent chapters.
Gini Ratio
To translate the shape of the Lorenz curve into a numeric value, we can use the Gini ratio — defined as the ratio of the area between the equality line and the Lorenz curve to the total area under the equality line. On the plot below these areas correspond to the dark and light blue regions.
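Before moving to the plots, here is a minimal sketch (again with invented numbers) that computes the Gini coefficient directly from the Lorenz curve using the trapezoid rule, so the area-based definition above is easy to verify against ineq::Gini:
#-- hypothetical revenue per product
revenue <- c(5000, 3000, 1000, 700, 300)
#-- Lorenz curve points: cumulative share of products vs cumulative share of revenue
rev_sorted <- sort(revenue)                             # ascending, the classic Lorenz setup
p <- c(0, seq_along(rev_sorted) / length(rev_sorted))
L <- c(0, cumsum(rev_sorted) / sum(rev_sorted))
#-- area under the Lorenz curve (trapezoid rule), then Gini = 1 - 2 * area
area_under <- sum(diff(p) * (head(L, -1) + tail(L, -1)) / 2)
1 - 2 * area_under        # 0.468
ineq::Gini(revenue)       # should agree with the manual calculation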
Let us then visualise it for two periods — October 2019 and October 2024, exactly the same periods as on one of the earlier plots.
Once we have a good understanding, backed by visuals, of how the Gini ratio is calculated, let's plot it over the whole period.
I use R for the analysis, so the Gini ratio is readily available (as are the other ratios, which I'll show later). The initial data table (x3a_dt) contains revenue per product, per month. The resulting one has the Gini ratio per month.
#-- calculate the Gini ratio, monthly
library(data.table)
library(ineq)
x3a_ineq_dt <- x3a_dt[, .(gini = ineq::ineq(revenue, type = "Gini")), month]
Good, we have all these packages for the heavy lifting. The math behind it is not super complicated, but our time is precious.
The plot below shows the result of the calculations.
I haven't included a smoothing line with its confidence interval band, since we don't have measurement points but rather the results of a Gini calculation, with its own error distribution. To be very strict and precise about the math, we would need to calculate the confidence interval and base the smoothed line on that. The results are below.
Since we don't directly use the statistical significance of the calculated ratio, this super-strict approach is a little bit of an overkill. I haven't done it while charting the trend line for HHI, nor will I in subsequent plots. But it's good to be aware of this nuance.
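If you ever do need that stricter treatment, a simple route is a bootstrap confidence interval for the monthly Gini. A minimal sketch, using the x3a_dt table introduced above (the boot_gini helper and the 1,000 resamples are my own choices, not part of the original study):
#-- bootstrap a confidence interval for the Gini ratio of one month
boot_gini <- function(rev, n_boot = 1000, probs = c(0.025, 0.975)) {
  boots <- replicate(n_boot, ineq::Gini(sample(rev, replace = TRUE)))
  c(gini = ineq::Gini(rev), quantile(boots, probs))
}
#-- point estimate plus lower/upper bounds, per month
x3a_gini_ci_dt <- x3a_dt[, as.list(boot_gini(revenue)), month]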
We have seen two ratios so far — HHI and Gini, and they are far from identical. A Lorenz curve closer to the diagonal indicates a more uniform distribution, which is what we have for October 2019, yet the HHI is higher than for 2024, indicating more concentration in 2019. Maybe I made a mistake in the calculations, or, even worse, early on during data preparation? That would be really unfortunate. Or is the data fine, and we are just struggling with the correct interpretation?
I quite often have moments of such doubt, especially when moving through an analysis really fast. So how do we cope with that, tightening our grip on the data and our understanding of the dependencies? Remember, whatever analysis you do, there is always a first time. And quite often we don't have the luxury of 'leisure' research; more often it is already work for a client (or a superior, a stakeholder, whoever requested it, even ourselves, if it is our own initiative).
We need a good understanding of how to interpret all these ratios, including the dependencies between them. If you plan to present your results to others, questions here are guaranteed, so better be well prepared. We can work with an existing dataset, or we can generate a small set where it will be easier to catch the dependencies. Let us follow the latter approach.
Let us start by creating a small dataset,
library(data.table)

#-- Create sample revenue data
revenue <- list(
"2021" = rep(15, 10), # 10 values of 15
"2022" = c(rep(100, 5), rep(10, 25)), # 5 values of 100, 25 values of 10
"2023" = rep(25, 50), # 50 values of 25
"2024" = c(rep(100, 30), rep(10, 70)) # 30 values of 100, 70 values of 10
)
combining it into a data.table.
#-- Convert to data.table in one step
x_dt <- data.table(
  year = rep(names(revenue), sapply(revenue, length)),
  revenue = unlist(revenue)
)
A quick overview of the data.
It looks like we have what we needed — a simple dataset, but still quite realistic. Now we proceed with calculations and charts, similar to what we had for the real dataset before.
#-- HHI, Gini
xh_dt <- x_dt[, .(hhi = ineq::Herfindahl(revenue),
gini = ineq::Gini(revenue)), year]
#-- Lorenz
xl_dt <- x_dt[order(-revenue), .(
cum_prod_pct = seq_len(.N)/.N,
cum_rev_pct = cumsum(revenue)/sum(revenue)), year]
And rendering the plots.
These charts help a lot in understanding the ratios, the relations between them and to the data. It's always a good idea to have such a micro analysis, for ourselves and for stakeholders — as 'back pocket' slides, or even to share upfront.
Nerdy detail — how to slightly shift a line so it doesn't overlap, and add labels inside the plot? Render the plot, then do manual fine-tuning, expecting a few iterations.
#-- shift the line
xl_dt[year == "2021", `:=` (cum_rev_pct = cum_rev_pct - .01)]
For labelling I use ggrepel, but by default it will label all the points, while we need just one per line. In addition, we have to decide which one, for a good-looking chart.
#-- decide which points to label
labs_key2_dt <- data.table(
  year = c("2021", "2022", "2023", "2024"), position = c(4, 5, 25, 30))

#-- set keys
list(xl_dt, labs_key2_dt) |> lapply(setkey, year)
#-- join
label_positions2 <- xl_dt[
  labs_key2_dt, on = .(year),   # join on 'year'
  .SD[get('position')],         # use get('position') to pick the row given in labs_key2_dt
  by = .EACHI]                  # for each year
Render the plot.
#-- render plot
plot_22b <- xl_dt |>
ggplot(aes(cum_prod_pct, cum_rev_pct, color = year, group = year, label = year)) +
geom_line(linewidth = .2) +
geom_point(alpha = .8, shape = 21) +
theme_bw() +
scale_color_viridis_d(option = "H", begin = 0, end = 1) +
ggrepel::geom_label_repel(
  data = label_positions2, force = 10,
  box.padding = 2.5, point.padding = .3,
  seed = 3, direction = "x") +
... additional styling
I started with HHI, the Lorenz curve, and the accompanying Gini ratios, as they seemed to be good starting points for concentration and inequality measurements. However, there are numerous other ratios used to describe distributions, whether for inequality or in general. It's unlikely that we would use all of them at once, so pick the subset that provides the most insight for your specific problem.
With a properly structured dataset, it's quite simple to calculate them. I'm sharing code snippets with a number of ratios calculated monthly. We use the dataset we already have — monthly revenue per product (base products, excluding variants).
Starting with the ratios from the ineq package.
#---- inequality ----
x3_ineq_dt <- x3a_dt[, .(
# Classical inequality/concentration measures
gini = ineq::ineq(revenue, type = "Gini"), # Gini coefficient
hhi = ineq::Herfindahl(revenue), # Herfindahl-Hirschman Index
hhi_f = sum((rev_pct*100)^2), # HHI - formula
atkinson = ineq::ineq(revenue, type = "Atkinson"), # Atkinson index
theil = ineq::ineq(revenue, type = "Theil"), # Theil entropy index
kolm = ineq::ineq(revenue, type = "Kolm"), # Kolm index
rs = ineq::ineq(revenue, type = "RS"), # Ricci-Schutz index
entropy = ineq::entropy(revenue), # Entropy measure
hoover = mean(abs(revenue - mean(revenue)))/(2 * mean(revenue)), # Hoover (Robin Hood) index
Distribution shape, and top/bottom shares and ratios.
# Distribution shape measures
cv = sd(revenue)/mean(revenue), # Coefficient of Variation
skewness = moments::skewness(revenue), # Skewness
kurtosis = moments::kurtosis(revenue), # Kurtosis
# Ratio measures
p90p10 = quantile(revenue, 0.9)/quantile(revenue, 0.1), # P90/P10 ratio
p75p25 = quantile(revenue, 0.75)/quantile(revenue, 0.25), # Interquartile ratio
palma = sum(rev_pct[1:floor(.N*.1)])/sum(rev_pct[floor(.N*.6):(.N)]), # Palma ratio
# Concentration ratios and shares
top1_share = max(rev_pct), # Share of the top product
top3_share = sum(head(sort(rev_pct, decreasing = TRUE), 3)), # CR3
top5_share = sum(head(sort(rev_pct, decreasing = TRUE), 5)), # CR5
top10_share = sum(head(sort(rev_pct, decreasing = TRUE), 10)), # CR10
top20_share = sum(head(sort(rev_pct, decreasing = TRUE), floor(.N*.2))), # Top 20% share
mid40_share = sum(sort(rev_pct, decreasing = TRUE)[floor(.N*.2):floor(.N*.6)]), # Middle 40% share
bottom40_share = sum(tail(sort(rev_pct), floor(.N*.4))), # Bottom 40% share
bottom20_share = sum(tail(sort(rev_pct), floor(.N*.2))), # Bottom 20% share
Basic statistics, quantiles.
# Basic statistics
unique_products = .N, # Number of unique products
revenue_total = sum(revenue), # Total revenue
mean_revenue = mean(revenue), # Mean revenue per product
median_revenue = median(revenue), # Median revenue
revenue_sd = sd(revenue), # Revenue standard deviation
# Quantile values
q20 = quantile(revenue, 0.2), # 20th percentile
q40 = quantile(revenue, 0.4), # 40th percentile
q60 = quantile(revenue, 0.6), # 60th percentile
q80 = quantile(revenue, 0.8), # 80th percentile
Count measures.
# Count measures
above_mean_n = sum(revenue > mean(revenue)), # Number of products above the mean
above_2mean_n = sum(revenue > 2*mean(revenue)), # Number of products above 2x the mean
top_quartile_n = sum(revenue > quantile(revenue, 0.75)), # Number of products in the top quartile
zero_revenue_n = sum(revenue == 0), # Number of products with zero revenue
within_1sd_n = sum(abs(revenue - mean(revenue)) <= sd(revenue)), # Products within 1 SD
within_2sd_n = sum(abs(revenue - mean(revenue)) <= 2*sd(revenue)), # Products within 2 SD
Revenue above (or below) a threshold.
# Revenue above threshold
rev_above_mean = sum(revenue[revenue > mean(revenue)]) # Revenue from products above the mean
), month]
The resulting table has 40 columns and 72 rows (months).
As mentioned earlier, it's hard to imagine working with 40 ratios at once, so I'm rather showing a method for calculating them; you should pick the relevant ones. As always, it's good to visualise them and see how they relate to one another.
We can calculate a correlation matrix between all the ratios, or a chosen subset.
# Select key metrics for a clearer visualization
key_metrics <- c("gini", "hhi", "atkinson", "theil", "entropy", "hoover",
"top1_share", "top3_share", "top5_share", "unique_products")

cor_matrix <- x3_ineq_dt[, .SD, .SDcols = key_metrics] |> cor()
Change the column names to friendlier ones.
# Make variable names more readable
pretty_names <- c(
"Gini", "HHI", "Atkinson", "Theil", "Entropy", "Hoover",
"High 1%", "High 3%", "High 5%", "Merchandise"
)
colnames(cor_matrix) <- rownames(cor_matrix) <- pretty_names
And render the plot.
corrplot::corrplot(cor_matrix,
sort = "higher",
methodology = "shade",
tl.col = "black",
tl.srt = 45,
diag = F,
order = "AOE")
Then we can plot some interesting pairs. Of course, some of them are positively or negatively correlated by definition, while in other cases it is not that obvious.
We started the analysis with ratios and the Lorenz curve as a top-down overview. It's a good start, but there are two problems — the ratios stay within a relatively broad range as long as the business is doing okay, and there is hardly any connection to actionable insights. Even if we find that a ratio is on the edge, or outside of the safe range, it's unclear what we should do. And instructions like "decrease concentration" are a little ambiguous.
E-commerce talks and breathes products, so to make the analysis relatable, we need to reference particular products. People would also like to understand which products constitute the core 50% or 80% of revenue and, equally important, whether those products consistently stay top contributors.
Let us take one month, August 2024, and see which products contributed 50% of the revenue in that month. Then we check the revenue from those exact products in the other months. There are 5 products generating (at least) 50% of the revenue in August.
We can also render a more visually appealing plot with a streamgraph. Both plots show exactly the same dataset, but they complement each other nicely — bar plots for precision, the streamgraph for a story.
The pink line indicates the selected month. If you feel the "itch" to shift that line, like on an old-fashioned radio dial, you are absolutely right — this should be an interactive chart, and in fact it is, including a slider for the revenue share percentage (we produced it for a client).
So what if we shift that pink "tuning line" a little bit back, maybe to 2020? The logic in the data preparation is very similar — get the products contributing to a certain revenue share threshold, and check the revenue from those products in the other months.
With interactivity on two elements — revenue contribution percentage and the date, one can learn a lot about the business, and that is exactly the point of these charts. One can look from different angles:
- concentration: how many products do we need for a certain revenue threshold,
- the products themselves: do they stay in a certain revenue contribution bin, or do they change, and why? Is it seasonality, a valid choice, a lost supplier or something else?
- the time window: whether we look at one month or a whole year,
- seasonality: comparing a similar time of year with previous periods.
What the Data Tells Us
Our 6-year dataset reveals the evolution of an e-commerce business from high concentration to balanced growth. Here are the key patterns and lessons:
With 6 years of data, I had a unique chance to watch concentration metrics evolve as the business grew. Starting with only a handful of products, I saw exactly what you'd expect — sky-high concentration. But as new products entered the mix, things got more interesting. The business found its rhythm with a dozen or so top performers, and the HHI settled into a comfortable 700–800 range.
Here's something interesting I discovered: concentration and inequality might sound like twins, but they're more like distant cousins. I noticed this while comparing HHI against Lorenz curves and their Gini ratios. Trust me, you'll want to get comfortable with the math before explaining these patterns to stakeholders — they can smell uncertainty from a mile away.
Want to really understand these metrics? Do what I did: create a dummy dataset so simple it's almost embarrassing. I'm talking basic patterns that a fifth-grader could grasp. Sounds like overkill? Maybe, but it saved me countless hours of head-scratching and misinterpretation. Keep these examples in your back pocket — or better yet, share them upfront. Nothing builds confidence like showing you've done your homework.
Look, calculating these ratios isn't rocket science. The real magic happens when you dig into how each product contributes to your revenue. That's why I added the "show me the money" section — I don't believe in quick fixes or magic formulas. It's about rolling up your sleeves and understanding how each product really behaves.
As you've probably noticed yourself, those streamgraphs I showed you are practically begging for interactivity. And boy, does that add value! Once you've got your keys and joins sorted out, it's not even that complicated. Give your users an interactive tool, and suddenly you're not drowning in one-off questions anymore — they're discovering insights themselves.
Here's a pro tip: use this concentration analysis as your foot in the door with stakeholders. Show your product teams that streamgraph, and I guarantee their eyes will light up. When they start asking for interactive versions, you've got them hooked. The best part? They'll think it was their idea all along. That's how you get real adoption — by letting them discover the value themselves.
Data Engineering Takeaways
While we usually roughly know what to expect in a dataset, it's almost guaranteed that there will be some nuances, exceptions, or maybe even surprises. It's good to spend some time reviewing the dataset, using dedicated functions (like str or glimpse in R), looking for empty fields and outliers, but also simply scrolling through it to understand the data. I like comparisons, and in this case I'd compare it to smelling the fish at the market before jumping in to prepare sushi 🙂
Then, if we work with a raw data export, quite likely there will be a lot of columns in the data dump; after all, if we click 'export all', wouldn't we expect exactly that? For most analyses we will need a subset of those columns, so it's good to trim them and keep only what we need. I assume we work with a script, so if it turns out we need more, it's not an issue: just add the missed column and rerun that chunk.
In the dataset dump there was a timestamp in only one row per transaction, while we needed it for each product. Hence some light data wrangling to propagate those timestamps to all the products.
After cleaning the dataset, it's essential to consider the context of the analysis, including the questions to be answered and the necessary adjustments to the data. This "contextual cleaning/wrangling" is crucial, as it determines whether the analysis succeeds or fails. In our scenario, the goal was to analyse product concentration, therefore filtering out variants (size, colour, and so on) was essential. If we had skipped that, the result would have been radically different.
Quite often we can expect some "traps", where initially it seems we can apply a simple approach, while actually we should add a bit of sophistication. One example is the Lorenz curve, where we need to calculate how many products we need to reach a certain revenue threshold. This is where I use rolling joins, which fit here perfectly.
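For readers less familiar with rolling joins, here is a tiny, self-contained sketch (toy numbers) of the roll = -Inf behaviour used later in this article: for each requested threshold we pick the first product whose cumulative revenue share reaches it.
library(data.table)
#-- toy cumulative revenue shares, products already sorted by revenue
products_dt <- data.table(prod_n = 1:5,
                          cum_rev_pct = c(0.40, 0.65, 0.80, 0.92, 1.00))
thresholds_dt <- data.table(cum_rev_pct = c(0.5, 0.8, 0.95))
setkey(products_dt, cum_rev_pct)
setkey(thresholds_dt, cum_rev_pct)
#-- roll = -Inf matches each threshold to the next cumulative share at or above it
products_dt[thresholds_dt, roll = -Inf]
#-- result: prod_n = 2 for 0.50 (2 products already cover 50%),
#--         prod_n = 3 for 0.80, and prod_n = 5 for 0.95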
The core logic to produce the streamgraphs is to find the products which constitute a certain revenue percentage in a given month, then "freeze" them and get their revenue in the other months. The toolset I used was adding an extra column with a product number after sorting within each month, and then playing with keys and joins.
An important element of this analysis was adding interactivity, allowing users to play with some parameters. That raises the bar, as we need all these operations to be performed lightning fast. The ingredients we need are the right data structure, extra columns, proper keys and joins. Prepare as much as possible, precalculating in a data warehouse, so the dashboarding tool is not overloaded. Take caching into account.
How to Start?
Strike a balance between delivering what stakeholders request and exploring potentially valuable insights they haven't asked for yet. The analysis I presented follows this pattern — getting the initial concentration ratios is straightforward, while building an interactive streamgraph optimized for lightning-fast operation requires significant effort.
Start small and involve others. Share basic findings, discuss what you might learn together, and only then proceed with the more labour-intensive analysis once you've secured genuine interest. And always keep a solid grip on your raw data — it's invaluable for answering those inevitable ad-hoc questions quickly.
Building a prototype before full production allows you to validate interest and gather feedback without devoting too much time. In my case, those simple concentration ratios sparked debates that eventually led to the more advanced interactive study stakeholders rely on today.
I'll show you how I prepared the data at each step of this analysis. Since I used R, I'll include the actual code snippets — they'll help you get started faster, even if you're working in a different language. This is the code I used for the study, though you'll probably need to adapt it to your specific needs rather than just copying it over. I decided to keep the code separate from the main analysis, to make it more streamlined and readable for both technical and business users.
While I'm presenting an analysis based on a Shopify export, there is no limitation to a particular platform; we just need transaction data.
Shopify export
Let's start with getting our data from Shopify. The raw export needs some work before we can dive into concentration analysis — here's what I had to deal with first.
We start with an export of the raw transaction data from Shopify. It might take some time, and when it is ready, we get an e-mail with links to download.
#-- 0. libs
pacman::p_load(data.table)

#-- 1.1 load data; the csv files are what we get as a full export from Shopify
xs1_dt <- fread(file = "shopify_raw/orders_export_1.csv")
xs2_dt <- fread(file = "shopify_raw/orders_export_2.csv")
xs3_dt <- fread(file = "shopify_raw/orders_export_3.csv")
Once we have the data, we need to combine these files into one dataset, trim the columns and perform some cleaning.
#-- 1.2 check all columns, limit them to the essential ones (for this analysis) and bind into one data.table
xs1_dt |> colnames()
# there are 79 columns in the full export,
# so we select a subset, relevant for this analysis
sel_cols <- c("Name", "Email", "Paid at", "Fulfillment Status", "Accepts Marketing", "Currency", "Subtotal",
"Lineitem quantity", "Lineitem name", "Lineitem price", "Lineitem sku", "Discount Amount",
"Billing Province", "Billing Country")

#-- combine into one data.table, with a subset of columns
xs_dt <- data.table::rbindlist(l = list(xs1_dt, xs2_dt, xs3_dt),
use.names = T, fill = T, idcol = T) %>% .[, ..sel_cols]
Some data preparation.
#-- 2. data prep
#-- 2.1 replace spaces in column names, for easier handling
sel_cols_new <- sel_cols |> stringr::str_replace(pattern = " ", replacement = "_")
setnames(xs_dt, old = sel_cols, new = sel_cols_new)

#-- 2.2 transaction as integer
xs_dt[, `:=` (Transaction_id = stringr::str_remove(Name, pattern = "#") |> as.integer())]
Anonymize the emails, as we don't need or want to deal with real emails during the analysis.
#-- 2.3 anonymize e-mail
new_cols <- c("Email_hash")
xs_dt[, (new_cols) := .(digest::digest(Email, algo = "md5")), .I]
Change the column types; this depends on personal preferences.
#-- 2.4 change Accepts_Marketing to logical column
xs_dt[, `:=` (Accepts_Marketing_lgcl = fcase(
Accepts_Marketing == "yes", TRUE,
Accepts_Marketing == "no", FALSE,
default = NA))]
Now we handle the transactions dataset. In the export files, the transaction amount and timestamp appear in only one row for all the items in the basket. We need to get those timestamps and propagate them to all items.
#-- 3 transactions dataset
#-- 3.1 subset transactions
#-- limit columns to those essential for the transaction only
trans_sel_cols <- c("Transaction_id", "Email_hash", "Paid_at",
"Subtotal", "Currency", "Billing_Province", "Billing_Country")

#-- get the transactions table based on the requirement of a non-null payment - the payment (date, amount) is not given for every product, it appears only once per basket
xst_dt <- xs_dt[!is.na(Paid_at) & !is.na(Transaction_id), ..trans_sel_cols]
#-- date columns
xst_dt[, `:=` (date = as.Date(`Paid_at`))]
xst_dt[, `:=` (month = lubridate::floor_date(date, unit = "months"))]
Some additional information, or, as I call them, derivatives.
#-- 3.2 is the customer returning? their n-th transaction
setkey(xst_dt, Paid_at)
xst_dt[, `:=` (tr_n = 1)][, `:=` (tr_n = cumsum(tr_n)), Email_hash]
xst_dt[, `:=` (returning = fcase(tr_n == 1, FALSE, default = TRUE))]
Do we have any NA's in the dataset?
xst_dt[!complete.cases(xst_dt), ]
Products dataset.
#-- 4 products dataset
#-- 4.1 subset of columns
sel_prod_cols <- c("Transaction_id", "Lineitem_quantity", "Lineitem_name",
"Lineitem_price", "Lineitem_sku", "Discount_Amount")
Now we join these two datasets, to have the transaction characteristics (trans_sel_cols) for all the products.
#-- 5 join the two datasets
list(xs_dt, xst_dt) |> lapply(setkey, Transaction_id)
x3_dt <- xs_dt[, ..sel_prod_cols][xst_dt]
Let's check which columns we have in the x3_dt dataset.
And it is also a good moment to examine the dataset.
x3_dt |> str()
x3_dt |> dplyr::glimpse()
x3_dt |> head()
Time for data cleaning. First up: splitting Lineitem_name into base products and their variants. In theory, these are separated by a dash ("-"). Simple, right? Not quite — some product names, like "All-Purpose", contain dashes as part of their name. So we need to handle these special cases first, temporarily replacing the problematic dashes, doing the split, and then restoring the original product names.
#-- 6. cleaning, aggregation on product names
#-- 6.1 split the product name into base and variants
#-- split product names into core and variants
product_cols <- c("base_product", "variants")
#-- with special treatment for 'All-Purpose'
x3_dt[stringr::str_detect(string = Lineitem_name, pattern = "All-Purpose"),
(product_cols) := {
tmp = stringr::str_replace(Lineitem_name, "All-Purpose", "AllPurpose")
s = stringr::str_split_fixed(tmp, pattern = "[-/]", n = 2)
s = stringr::str_replace(s, "AllPurpose", "All-Function")
.(s[1], s[2])
}, .I]
It's good to run a validation after each step.
# validation
x3_dt[stringr::str_detect(
string = Lineitem_name, pattern = "All-Purpose"), .SD,
.SDcols = c("Transaction_id", "Lineitem_name", product_cols)]
We keep moving with the data cleaning — the exact steps depend of course on the particular dataset, but I'm sharing my flow as an example.
#-- two scenarios, to handle `(32-ounce)` in the product name; we don't want that hyphen to cut the name
x3_dt[stringr::str_detect(string = `Lineitem_name`, pattern = "ounce", negate = T) &
stringr::str_detect(string = `Lineitem_name`, pattern = "All-Purpose", negate = T),
(product_cols) := {
s = stringr::str_split_fixed(string = `Lineitem_name`, pattern = "[-/]", n = 2); .(s[1], s[2])
}, .I]

x3_dt[stringr::str_detect(string = `Lineitem_name`, pattern = "ounce", negate = F) &
stringr::str_detect(string = `Lineitem_name`, pattern = "All-Purpose", negate = T),
(product_cols) := {
s = stringr::str_split_fixed(string = `Lineitem_name`, pattern = ") - ", n = 2); .(paste0(s[1], ")"), s[2])
}, .I]
#-- small patch for exceptions
x3_dt[stringr::str_detect(string = base_product, pattern = "))$", negate = F),
base_product := stringr::str_replace(string = base_product, pattern = "))$", replacement = ")")]
Validation.
# validation
x3_dt[stringr::str_detect(string = `Lineitem_name`, pattern = "ounce")
][, .SD, .SDcols = c(eval(sel_cols[6]), product_cols)
][, .N, c(eval(sel_cols[6]), product_cols)]

x3_dt[stringr::str_detect(string = `Lineitem_name`, pattern = "All")
][, .SD, .SDcols = c(eval(sel_cols[6]), product_cols)
][, .N, c(eval(sel_cols[6]), product_cols)]
x3_dt[stringr::str_detect(string = base_product, pattern = "All")]
We use eval(sel_cols[6]) to get the name of the column sel_cols[6], which is Currency.
We also need to deal with NA's, but with an understanding of the dataset — where we may have NA's and where they are not supposed to be, indicating an issue. In some columns, like `Discount_Amount`, we have values (actual discounts), zeros, but also sometimes NA's. Checking the final price, we conclude they are zeros.
#-- deal with NA's - replace them with 0
sel_na_cols <- c("Discount_Amount")
x3_dt[, (sel_na_cols) := lapply(.SD, fcoalesce, 0), .SDcols = sel_na_cols]
For consistency and convenience, change all column names to lowercase.
setnames(x3_dt, tolower(names(x3_dt)))
And verification.
Of course, review the dataset, with some test aggregations, and also by simply printing it out.
Save the dataset as both Rds (the native R format) and csv.
x3_dt |> fwrite(file = "information/merchandise.csv")
x3_dt |> saveRDS(file = "information/x3_dt.Rds")
Completing the steps above, we should have a clean dataset for further analysis. The code should serve as a guideline, but it can also be used directly if you work in R.
Variants
As a first glimpse, we will check the number of products per month, both base_product and including all variants.
As a small cleanup, I take only full months.
month_last <- x3_dt[, max(month)] - months(1)
Then we count the monthly numbers, storing them in temporary tables, which are then joined.
x3_a_dt <- x3_dt[month <= month_last, .N, .(base_product, month)
][, .(base_products = .N), keyby = month]

x3_b_dt <- x3_dt[month <= month_last, .N, .(lineitem_name, month)
][, .(products = .N), keyby = month]
x3_c_dt <- x3_a_dt[x3_b_dt]
Some data wrangling.
#-- names, as we want them on the plot
setnames(x3_c_dt, old = c("base_products", "products"), new = c("base", "all, with variants"))

#-- long form
x3_d_dt <- x3_c_dt[, melt.data.table(.SD, id.vars = "month", variable.name = "Products")]
#-- reverse the factors, so they appear on the plot in the proper order
x3_d_dt[, `:=` (Products = forcats::fct_rev(Products))]
We are ready to plot the dataset.
plot_01_w <- x3_d_dt |>
ggplot(aes(month, value, color = Products, fill = Products)) +
geom_line(show.legend = FALSE) +
geom_area(alpha = .8, position = position_dodge()) +
theme_bw() +
scale_fill_viridis_d(direction = -1, option = "G", begin = 0.3, end = .7) +
scale_color_viridis_d(direction = -1, option = "G", begin = 0.3, end = .7) +
labs(x = "", y = "Products",
title = "Unique products, monthly", subtitle = "Impact of aggregation") +
theme(... additional styling)
The next plot shows the number of variants grouped into bins. This gives us a chance to talk about chaining operations in R, particularly with the data.table package. In data.table, we can chain operations by opening a new bracket right after closing one — resulting in the ][ syntax. It creates a compact, readable chain that's still easy to debug since you can execute it piece by piece. I prefer succinct code, but that's just my style — use whatever approach works best for you. We can write code in one line, or multi-line, with logical steps.
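A tiny illustration of that chaining style, on a throwaway table (the names are invented):
library(data.table)
dt <- data.table(product = c("A", "A", "B", "C"), revenue = c(10, 20, 5, 40))
#-- aggregate, sort, then add a share column - one chain, three steps
dt[, .(revenue = sum(revenue)), product    # step 1: revenue per product
   ][order(-revenue)                       # step 2: sort descending
   ][, share := revenue / sum(revenue)][]  # step 3: add a revenue share column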
On one of the plots we look at the date when each product was first seen. To get that date, we set a key on date, and then take the first occurrence, date[1], per each base_product.
#-- variants per year, per product, with the date when it was 1st seen
x3c_dt <- x3_dt[, .N, .(base_product, variants)
][, .(variants = .N), base_product][order(-variants)]

x3_dt |> setkey(date)
x3d_dt <- x3_dt[, .(date = date[1]), base_product]

list(x3c_dt, x3d_dt) |> lapply(setkey, base_product)

x3e_dt <- x3c_dt[x3d_dt][order(variants)
][, `:=` (year = year(date) |> as.factor())][year != 2018
][, .(products = .N), .(variants, year)][order(-variants)
][, `:=` (
variant_bin = cut(
variants,
breaks = c(0, 1, 2, 5, 10, 20, 100, Inf),
include.lowest = TRUE,
right = FALSE
))
][, .(total_products = sum(products)), .(variant_bin, year)
][order(variant_bin)
][, `:=` (year_group = fcase(
year %in% c(2019, 2020, 2021), "2019-2021",
year %in% c(2022, 2023, 2024), "2022-2024"
))
][, `:=` (variant_bin = forcats::fct_rev(variant_bin))]
The resulting table is exactly as we need it for charting.
The second plot uses the transaction date, so the data wrangling is similar, but without the date[1] step.
If we want to combine a couple of plots, we can produce them separately and combine them using, for example, ggpubr::ggarrange(), or we can combine the tables into one dataset and then use the faceting functionality (a sketch of the first route follows below). The former works when the plots are of a completely different nature, while the latter is useful when we can naturally have a combined dataset.
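A minimal sketch of the first route, assuming two ggplot objects p1 and p2 built earlier (the names are placeholders):
#-- combine two independently built plots side by side
ggpubr::ggarrange(p1, p2, ncol = 2, common.legend = TRUE, legend = "bottom")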
As an illustration of the latter route, a few more lines from my script.
x3h_dt <- data.table::rbindlist(
l = list(
introduction = x3e_dt[, `:=` (year = as.numeric(as.character(year)))],
transaction = x3g_dt),
use.names = T, fill = T, idcol = T)
And the plot code.
plot_04_w <- x3h_dt |>
ggplot(aes(year, total_products,
color = variant_bin, fill = variant_bin, group = .id)) +
geom_col(alpha = .8) +
theme_bw() +
scale_fill_viridis_d(direction = 1, option = "G") +
scale_color_viridis_d(direction = 1, option = "G") +
labs(x = "", y = "Base Products",
title = "Products, and their variants",
subtitle = "Yearly",
fill = "Variants",
color = "Variants") +
facet_wrap(".id", ncol = 2) +
theme(... other styling options)
Faceting has a big advantage, because we operate on one table, which helps a lot in assuring data consistency.
Pareto
The essence of the Pareto calculation is to find how many products we need to reach a certain revenue percentage. We need to prepare the dataset in a couple of steps.
#-- calculate quantity and revenue per base_product, monthly
x3a_dt <- x3_dt[, {
items = sum(lineitem_quantity, na.rm = T);
revenue = sum(lineitem_quantity * lineitem_price);
.(items, revenue)}, keyby = .(month, base_product)
][, `:=` (i = 1)][order(-revenue)][revenue > 0, ]

#-- calculate the percentage share, and the cumulative percentage
x3a_dt[, `:=` (
rev_pct = revenue / sum(revenue),
cum_rev_pct = cumsum(revenue) / sum(revenue), prod_n = cumsum(i)), month]
In case we need to mask the real product names, let us create a new variable.
#-- product name masking
x3a_dt[, masked_name := paste("Product", .GRP), by = base_product]
And a dataset printout, with a subset of columns.
And filtered for one month, showing a few lines from the top and from the bottom.
The key column is cum_rev_pct, which indicates the cumulative percentage of revenue from products 1-n. We need to find which prod_n covers each revenue percentage threshold, as given in the pct_thresholds_dt table.
So we are ready for the actual Pareto calculation. The code below, with comments.
#-- pareto
#-- set percentage thresholds
pct_thresholds_dt <- data.table(cum_rev_pct = c(0, .2, .3, .5, .8, .95, 1))

#-- set key for the join
list(x3a_dt, pct_thresholds_dt) |> lapply(setkey, cum_rev_pct)
#-- subset columns (optional)
sel_cols <- c("month", "cum_rev_pct", "prod_n")
#-- perform a rolling join - the crucial step!
x3b_dt <- x3a_dt[, .SD[pct_thresholds_dt, roll = -Inf], month][, ..sel_cols]
Why do we perform a rolling join? We need to find the first cum_rev_pct that covers each threshold.
We need 2 products for 20% of revenue, 4 products for 30%, and so on. And to reach 100% of revenue, of course, we need contributions from all 72 products.
And a plot.
#-- data prep
x3b1_dt <- x3b_dt[month < month_max,
.(month, cum_rev_pct = as.factor(cum_rev_pct) |> forcats::fct_rev(), prod_n)]

#-- charting
plot_07_w <- x3b1_dt |>
ggplot(aes(month, prod_n, color = cum_rev_pct, fill = cum_rev_pct)) +
geom_line() +
theme_bw() +
geom_area(alpha = .2, show.legend = F, position = position_dodge(width = 0)) +
scale_fill_viridis_d(direction = -1, option = "G", begin = 0.2, end = .9) +
scale_color_viridis_d(direction = -1, option = "G", begin = 0.2, end = .9,
labels = function(x) scales::percent(as.numeric(as.character(x))) # convert the factor to numeric first
) +
... other styling options ...
Lorenz curve
To plot the Lorenz curve, we need to sort products by their contribution to total revenue, and normalize both the number of products and the revenue.
Before the main code, a useful trick to pick the n-th month from the dataset, counting from the beginning or from the end.
month_sel <- x3a_dt$month |> unique() |> sort(decreasing = T) |> dplyr::nth(2)
And the code.
xl_oct24_dt <- x3a_dt[month == month_sel,
][order(-revenue), .(
cum_prod_pct = seq_len(.N)/.N,
cum_rev_pct = cumsum(revenue)/sum(revenue))]
To chart separate lines per time period, we need to adjust accordingly.
#-- Lorenz curve, yearly aggregation
xl_dt <- x3a_dt[order(-revenue), .(
cum_prod_pct = seq_len(.N)/.N,
cum_rev_pct = cumsum(revenue)/sum(revenue)), month]
The xl_dt dataset is ready for charting.
Indices, ratios
The code is simple here, assuming sufficient prior data preparation. The logic and some snippets are in the main body of this article.
Streamgraph
The streamgraph shown earlier is an example of a chart that may appear difficult to render, especially when interactivity is required. One of the reasons I included it in this blog is to show how we can simplify such tasks with keys, joins, and data.table syntax in particular. Using keys, we can achieve very efficient filtering for interactivity. Once we have a handle on the data, we are almost done; all that remains are some settings to fine-tune the plot.
We start with the thresholds table.
#-- set percentage thresholds
pct_thresholds_dt <- data.table(cum_rev_pct = c(0, .2, .3, .5, .8, .95, 1))
Since we want the joins performed monthly, it's good to create a data subset covering one month, to test the logic before extending it to the full dataset.
#-- test the logic for one month
month_sel <- as.Date("2020-01-01")
sel_a_cols <- c("month", "rev_pct", "cum_rev_pct", "prod_n", "masked_name")
x3a1_dt <- x3a_dt[month == month_sel, ..sel_a_cols]
We have 23 products in January 2020, sorted by revenue percentage, and we also have the cumulative revenue, reaching 100% with the last, 23rd product.
Now we need to create an intermediate table, telling us how many products we need to reach each revenue threshold.
#-- set key for the join
list(x3a1_dt, pct_thresholds_dt) |> lapply(setkey, cum_rev_pct)

#-- perform a rolling join - the crucial step!
sel_b_cols <- c("month", "cum_rev_pct", "prod_n")
x3b1_dt <- x3a1_dt[, .SD[pct_thresholds_dt, roll = -Inf], month][, ..sel_b_cols]
Because we work with a one-month data subset (and pick a month with not that many products), it is very easy to check the result — comparing the x3a1_dt and x3b1_dt tables.
And now we need to get the product names, for a chosen threshold.
#-- get the products
#-- set keys
list(x3a1_dt, x3b1_dt) |> lapply(setkey, month, prod_n)

#-- specify the threshold
x3b1_dt[cum_rev_pct == .8][x3a1_dt, roll = -Inf, nomatch = 0]
#-- or, equivalently, specify the table's row
x3b1_dt[5, ][x3a1_dt, roll = -Inf, nomatch = 0]
To reach 80% of revenue, we need 7 products, and from the join above, we get their names.
I think you can already see why we use rolling joins here and can't rely on simple < or > logic.
Now, we need to extend the logic to all months.
#-- extend to all months
#-- set key for the join
list(x3a_dt, pct_thresholds_dt) |> lapply(setkey, cum_rev_pct)
#-- subset columns (optional)
sel_cols <- c("month", "cum_rev_pct", "prod_n")
#-- perform a rolling join - the crucial step!
x3b_dt <- x3a_dt[, .SD[pct_thresholds_dt, roll = -Inf], month][, ..sel_cols]
Get the products.
#-- set keys, join
list(x3a_dt, x3b_dt) |> lapply(setkey, month, prod_n)
x3b6_dt <- x3b_dt[cum_rev_pct == .8][x3a_dt, roll = -Inf, nomatch = 0][, ..sel_a_cols]
And verify, for the same month as in the test data subset.
If we want to freeze the products for a certain month, and see the revenue from them over the whole period (which is what the second streamgraph shows), we can set a key on the product name and perform a join.
#-- freeze the products
x3b6_key_dt <- x3b6_dt[month == month_sel, .(masked_name)]
list(x3a_dt, x3b6_key_dt) |> lapply(setkey, masked_name)

sel_b2_cols <- c("month", "revenue", "masked_name")
x3a6_dt <- x3a_dt[x3b6_key_dt][, ..sel_b2_cols]
And we get exactly what we needed.
Using joins, including rolling ones, and deciding what can be precalculated in a warehouse and what is left for dynamic filtering in a dashboard does require some practice, but it definitely pays off.