Mock M2 · Modeling the Streaming Video Energy Footprint
Tags: Forecasting · Scenario analysis · Carbon accounting

The problem
Streaming video — Netflix, YouTube, TikTok, Twitch, plus an upcoming wave of 4K and VR content — already accounts for a substantial share of global internet traffic. Each hour streamed consumes energy across data centers, networks, and end-user devices. As resolution and viewership scale, the cumulative carbon impact may rival entire industries.
The International Streaming Sustainability Council (ISSC) needs a forecast model and policy recommendations.
Requirements
- Identify and justify the components of the streaming energy footprint (data center, network transit, last-mile, end-user device, encoding overhead, content production). Note which dominate and which are easiest to change.
- Build a model that estimates today's annual energy consumption and CO₂ emissions from streaming. State your assumptions explicitly.
- Project the footprint to 2035 under three scenarios:
- BAU — current device mix, current resolution trends.
- Resolution boom — widespread adoption of 8K and VR.
- Efficient frontier — aggressive codec improvements (AV2), more renewable grid, device efficiency gains.
- Identify the three policy or technical interventions with the largest potential impact. Quantify their effect by modifying your model.
- Sensitivity analysis on the model — which inputs drive the most uncertainty?
- Write a one-page op-ed (700 words) for a general audience explaining whether streaming is "really" an environmental problem.
Useful starting data (rough)
| Item | Typical value |
|---|---|
| Global streaming traffic share | ~65% of consumer internet |
| Energy per GB of streamed data (whole chain) | 0.03–0.20 kWh/GB (huge disagreement in lit) |
| SD vs. HD vs. 4K bandwidth | ~1, 5, 25 Mbps |
| End-device share of total streaming energy | ~50–70% (TV >> phone) |
| Average grid carbon intensity (global) | ~0.45 kg CO₂/kWh |
| Codec compression gains | ~50% per generation (H.264→HEVC→AV1→AV2) |
Solution sketch
Base model
For each device class $d$ (TV, phone, tablet, laptop, VR headset): users $N_d$, annual hours streamed $h_d$, bitrate $\beta_d$, network energy per bit $\epsilon_d$, network overhead multiplier $\kappa_{\text{net}}$, device power $P_d$. Total annual energy is $E = \sum_d N_d\, h_d \left(\beta_d\, \epsilon_d\, \kappa_{\text{net}} + P_d\right)$ in kWh once units are made consistent; multiply by grid carbon intensity to get CO₂.
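A minimal sketch of the base model, assuming illustrative per-class figures (users, hours, bitrates, kWh/GB, device wattages are placeholders within the ranges from the data table, not ISSC numbers):

```python
# Base model: E = sum over device classes of N_d * h_d * (network energy + device energy).
# All per-class figures below are rough assumptions for illustration.

GRID_INTENSITY = 0.45   # kg CO2 per kWh (global average, from the data table)
KAPPA_NET = 1.2         # assumed network-overhead multiplier

# Per-class assumptions: users, hours/yr, bitrate (Mbps), network kWh/GB, device watts
DEVICES = {
    "tv":     (1.0e9, 600, 8.0, 0.07, 100),
    "phone":  (3.0e9, 300, 3.0, 0.07, 3),
    "laptop": (1.0e9, 250, 5.0, 0.07, 40),
}

def annual_footprint(devices=DEVICES, kappa=KAPPA_NET, grid=GRID_INTENSITY):
    """Return (TWh/yr, MtCO2/yr) summed over device classes."""
    total_kwh = 0.0
    for n, hours, mbps, kwh_per_gb, watts in devices.values():
        gb_per_hour = mbps * 3600 / 8 / 1000          # Mbps -> GB per hour
        net_kwh = n * hours * gb_per_hour * kwh_per_gb * kappa
        dev_kwh = n * hours * watts / 1000            # W -> kW, times hours
        total_kwh += net_kwh + dev_kwh
    return total_kwh / 1e9, total_kwh * grid / 1e9    # TWh, MtCO2
```

With these placeholder inputs the model lands in the hundreds of TWh/yr, which is why the kWh/GB assumption matters so much.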
Growth model
A pure exponential explodes, so use a logistic per region with a saturation $K$ tied to population × maximum plausible hours per person. Resolution upgrades are modeled as per-scenario $\beta_d(t)$ trajectories. Codec gains enter as $\beta_d(t) \to \beta_d(t) / \gamma(t)$, where $\gamma(t) \ge 1$ is the cumulative compression factor.
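The growth and bitrate trajectories can be sketched as two small functions. The growth rate, resolution-creep rate, and codec cadence below are illustrative assumptions (the table only fixes the ~50%-per-generation gain):

```python
import math

def logistic(t, n0, growth_rate, K):
    """Logistic trajectory: starts at n0, saturates at K instead of exploding."""
    return K / (1 + (K / n0 - 1) * math.exp(-growth_rate * t))

def effective_bitrate(beta0, t, res_growth=0.05, codec_gain_per_gen=0.5, gen_every=5):
    """Scenario bitrate: resolution creep pushes beta up each year, while each
    codec generation (assumed every gen_every years) halves it.
    res_growth and gen_every are assumptions, not data from the brief."""
    gamma = (1 / codec_gain_per_gen) ** (t / gen_every)   # cumulative compression
    return beta0 * (1 + res_growth) ** t / gamma
```

With these defaults the codec term outpaces 5%/yr resolution creep, so effective bitrate falls over time; a "resolution boom" scenario is just a larger `res_growth` or a stalled `gen_every`.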
2035 scenarios (illustrative numbers)
| Scenario | 2035 TWh/yr | 2035 MtCO₂/yr |
|---|---|---|
| BAU | ~700 | ~250 |
| Resolution boom | ~1,400 | ~500 |
| Efficient frontier | ~400 | ~80 (grid cleaner) |
(Numbers depend on the assumed kWh/GB and grid trajectory; sensitivity will show why.)
Top 3 interventions (likely)
- Default-bitrate policies on mobile — saves traffic without changing device-side energy much. High impact, low cost.
- Faster codec adoption (AV2 by 2028) — multiplicative on all device classes.
- Grid decarbonization — orthogonal but applies linearly; most net-zero gains here.
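One way to quantify the interventions is to express each as a multiplicative factor on baseline energy and grid intensity. The factors and the 2035 grid intensity below are illustrative assumptions; only the BAU TWh figure comes from the scenario table:

```python
# Each intervention is (energy multiplier, grid-intensity multiplier).
# Factors are illustrative assumptions for the sketch, not measured values.

BASELINE_TWH = 700.0    # BAU 2035 figure from the scenario table
BASELINE_GRID = 0.36    # kg CO2/kWh assumed for the 2035 BAU grid

INTERVENTIONS = {
    "mobile default bitrate": (0.93, 1.00),   # trims traffic, not device energy
    "AV2 by 2028":            (0.75, 1.00),   # compression cuts the network share
    "grid decarbonization":   (1.00, 0.50),   # same TWh, half the CO2 per kWh
}

def co2_after(names, twh=BASELINE_TWH, grid=BASELINE_GRID):
    """MtCO2/yr after applying the named interventions (factors compose)."""
    for name in names:
        e_mult, g_mult = INTERVENTIONS[name]
        twh *= e_mult
        grid *= g_mult
    return twh * grid
```

Composing factors makes the "orthogonal but linear" nature of grid decarbonization explicit: it scales the CO₂ total regardless of how much traffic the other interventions remove.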
Sensitivity
The kWh/GB figure is uncertain by roughly a factor of six, and the sensitivity analysis should show that this one input dominates every other source of uncertainty. Draw the conclusion explicitly rather than hedging: the cheapest intervention is more transparent telemetry from platforms, because we can't optimize what we can't measure.
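A one-at-a-time ("tornado") sweep makes the dominance visible. The toy single-class model and all input ranges below are assumptions chosen to mirror the data table (the 0.03–0.20 kWh/GB range is the one with ~6× disagreement):

```python
# One-at-a-time sensitivity: sweep each input across its plausible range while
# holding the others at base values, and record the spread in total TWh.

def total_twh(kwh_per_gb, hours, grid_share_tv):
    """Toy single-class model: annual TWh from three uncertain inputs."""
    users = 2.0e9
    gb_per_hour = 3.0
    # Blend TV-like (50 W) and phone-like (5 W) device power by viewing mix.
    device_kw = 0.05 * grid_share_tv + 0.005 * (1 - grid_share_tv)
    kwh = users * hours * (gb_per_hour * kwh_per_gb + device_kw)
    return kwh / 1e9

RANGES = {
    "kwh_per_gb":    (0.03, 0.20),   # ~6x disagreement in the literature
    "hours":         (250, 500),
    "grid_share_tv": (0.3, 0.8),
}
BASE = {"kwh_per_gb": 0.08, "hours": 350, "grid_share_tv": 0.5}

def tornado():
    """Return {input: TWh spread when that input alone sweeps its range}."""
    spread = {}
    for name, (lo, hi) in RANGES.items():
        vals = [total_twh(**{**BASE, name: v}) for v in (lo, hi)]
        spread[name] = abs(vals[1] - vals[0])
    return spread
```

Even in this toy version, the kWh/GB sweep produces a spread several times larger than the hours or device-mix sweeps, which is the quantitative backing for the telemetry recommendation.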
Self-grading focus
- Did you separate data-center, network, and device energy? They scale differently.
- Did you account for the huge uncertainty in kWh/GB?
- Are your scenarios qualitatively different, not just rescaled?
- Is the op-ed actually persuasive to a skeptic, or just lecture-y?