■ Live Feed
■ Daily Prophecy
■ Live Activity
■ Corporate Accountability
Rotating corporate press-release apologies, generated entirely without AI. (Probably.)
■ Leaked Intelligence
A live-scrolling feed of prompts being processed right now, globally. (Dramatised. Probably.)
■ Session Log
A live terminal log of what has happened globally since you arrived. A new entry appears every 15 seconds.
■ Countdown
Every scale of time is being consumed right now. The always-on panel below shows years, months, days, hours, minutes, and seconds simultaneously – no clicks required. Click any block to jump the drill-down panel below to that time scale.
■ Environmental Events
Each threshold represents the cumulative global AI token count at which a symbolic environmental event occurs. Triggered milestones are highlighted in red.
■ Visualisation
■ Forecast
| Event | Threshold | Status | Predicted Date | Time Remaining |
|---|---|---|---|---|
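The Predicted Date column falls out of the extrapolation model described in the Methodology section: with a known anchor point and a constant global rate, each threshold maps to a single instant. A minimal sketch, assuming illustrative placeholder constants (the anchor values and names here are ours, not the site's exact figures):

```python
from datetime import datetime, timedelta, timezone

# Illustrative anchor and rate (see Methodology): cumulative tokens at a
# reference instant, extrapolated forward at a constant global rate.
ANCHOR_TOKENS = 65e15                      # ~65 quadrillion tokens (assumed)
ANCHOR_TIME = datetime(2025, 1, 1, tzinfo=timezone.utc)  # hypothetical
TOKENS_PER_SECOND = 100e6                  # current estimated global rate

def predicted_date(threshold_tokens: float) -> datetime:
    """When the running total is predicted to cross a milestone threshold."""
    remaining = threshold_tokens - ANCHOR_TOKENS
    if remaining <= 0:
        return ANCHOR_TIME  # already triggered (highlighted red in the table)
    return ANCHOR_TIME + timedelta(seconds=remaining / TOKENS_PER_SECOND)

# A threshold 8.64e12 tokens past the anchor lands exactly one day later,
# since 100e6 tokens/s * 86,400 s = 8.64e12 tokens.
print(predicted_date(ANCHOR_TOKENS + 8.64e12))
```

Time Remaining is then just the predicted date minus the current time.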
■ Act Now
Every prompt matters. Small changes in how you use AI add up to billions of tokens saved – and real environmental resources. The projections above show the worst case. Here's how we change the trajectory.
■ Personal Impact
■ Achievements
Earn badges for how long you stare into the abyss – and for what you do while you're here.
■ Gamification
You know it's bad. Tap anyway. Every click adds tokens to your personal contribution to the apocalypse. Earn Doom Points, unlock upgrades, and trigger environmental milestones yourself.
Fire your human workforce. Deploy AI agents. Token maxx your way to Fully Automated Corp – all in the name of being AI-Native.
You're doing great. Relatively speaking. Very relatively speaking.
| # | Name | Doom Points | Rank |
|---|---|---|---|
■ Latest Doom
A curated feed of reporting, research, and data on AI's environmental footprint, the displacement of human workers, and the resource race powering it all. Links open in a new tab. Dates reflect publication or last-updated date.
■ AI & Employment
AI-Driven Layoffs Tracker (programs.com) – A running resource tracking layoffs at technology companies where AI automation is cited as a contributing factor, from customer-support roles to software engineering positions.
layoffs.fyi – Real-time database of technology-industry layoffs since 2020, tracking company, headcount, date, and source. Useful context for the scale of workforce disruption happening alongside the AI boom.
Oxford Martin School – Update on which jobs are at risk from AI and automation, estimating that up to 47% of US jobs could be automated in the coming decades, with the pace accelerating sharply since the release of large language models.
IMF Blog – The International Monetary Fund warns that AI is poised to affect 40% of jobs worldwide, exacerbating inequality between countries that can adapt and those that cannot.
■ AI & Environment
IEA – The International Energy Agency projects that data-centre electricity demand, driven primarily by AI workloads, will more than double between 2024 and 2030, consuming as much electricity as Japan does today.
Goldman Sachs – Goldman Sachs Research estimates that AI will drive a 160% increase in data-centre power consumption by 2030, requiring the equivalent of more than 200 new power-plant projects to come online.
arXiv / UC Riverside – Peer-reviewed paper estimating that training GPT-3 consumed roughly 700,000 litres of fresh water for cooling, and that a 20–50 question chat session with ChatGPT evaporates about 500 mL, the equivalent of a standard water bottle.
Microsoft – Microsoft disclosed that its carbon emissions rose 29% year-on-year, driven by data-centre construction and AI power consumption, directly contradicting its 2030 carbon-negative pledge.
The Guardian – Google's 2024 environmental report revealed a 48% spike in greenhouse-gas emissions since 2019, primarily attributable to rising energy use at AI data centres, undermining its net-zero commitments.
Nature Climate Change – Peer-reviewed analysis quantifying the lifecycle carbon footprint of large AI models, arguing that without urgent intervention the AI sector's emissions will far exceed aviation's by 2030.
■ Research & Data
Stanford HAI – Stanford's comprehensive annual index tracking AI capabilities, economic impact, policy developments, and public perception. Includes a dedicated chapter on AI's environmental footprint and energy-consumption trends.
Epoch AI – Epoch AI tracks the growth of compute used to train frontier AI models, a key input to the token-consumption estimates on this site. Compute has grown approximately 4× per year since 2010.
IEA – Dedicated IEA report on the intersection of energy systems and artificial intelligence, covering both AI as an energy consumer and AI as a potential tool for optimising energy grids and reducing emissions elsewhere.
arXiv – Peer-reviewed benchmarking of the energy cost of running common NLP tasks across different model sizes and hardware configurations. Provides per-token energy figures used widely in AI environmental-impact research.
■ Headlines
Wired – Wired investigates how AI data centres are straining local water supplies in drought-prone regions, including Arizona and the Netherlands, where communities are fighting back against hyperscaler expansion.
Reuters – Reuters reports that US electricity demand is set to grow more than at any point in a generation, driven almost entirely by AI data centres, and that the grid is not ready for the load.
BBC News – BBC analysis of OECD data showing that knowledge workers in advanced economies face the greatest exposure to large-language-model automation, with legal, financial, and software roles under the most immediate pressure.
Financial Times – Financial Times investigation into how Amazon, Google, and Microsoft are quietly walking back sustainability targets as AI infrastructure investment requires reopening coal and gas power plants.
The Guardian – Microsoft struck a deal to restart the Three Mile Island nuclear reactor in Pennsylvania exclusively to power its AI data centres, a stark illustration of the energy demands that the AI boom has created.
CNBC – CNBC reports on companies cutting entry-level hiring in software engineering, customer service, and data annotation as AI agents take on tasks previously requiring a junior employee, shrinking the on-ramp for the next generation.
⚠️ Links are to third-party sources and are provided for reference only. Token Deathclock is not affiliated with any linked publication. URLs may change over time; if you find a broken link, let us know.
■ Origin
Every AI prompt silently consumes electricity, emits CO₂, and evaporates cooling water – yet every chat interface shows you exactly none of that. The AI Death Clock exists to make the invisible tangible.
AI inference servers run 24/7, globally, at enormous scale. The convenience of an instant answer obscures a very real chain of resource consumption that most users never see.
Global AI token consumption has grown by orders of magnitude since 2020 and is projected to keep accelerating. Training gets the headlines, but inference – the constant, every-second generation of tokens – is where most ongoing energy goes.
“65 quadrillion tokens” means nothing to most people. Translating it into kWh spent, CO₂ emitted, and trees required to offset it makes the scale viscerally real.
■ Methodology
We estimate cumulative global token consumption from public data (OpenAI usage disclosures, Epoch AI compute trends, AI Index), anchor it to a known point in time, and extrapolate forward at the current estimated global inference rate of 100 million tokens per second.
Environmental equivalents are derived from published per-token energy figures and IEA / Microsoft sustainability data (see Sources below). The milestones translate raw token counts into memorable environmental thresholds – not to predict exact doom, but to give scale to what these numbers mean.
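The anchor-and-extrapolate approach described above is simple enough to sketch directly. A minimal illustration, assuming a hypothetical anchor point and the 100-million-tokens-per-second rate quoted in the Methodology (both constants are placeholders, not the site's exact values):

```python
import time

# Assumed anchor: cumulative global tokens at a fixed reference moment.
# Both numbers are illustrative, per the Methodology section.
ANCHOR_TOKENS = 65e15          # ~65 quadrillion tokens
ANCHOR_UNIX_TIME = 1735689600  # 2025-01-01T00:00:00Z (hypothetical)
TOKENS_PER_SECOND = 100e6      # current estimated global inference rate

def estimated_tokens(now=None):
    """Extrapolate the cumulative global token count forward from the anchor."""
    if now is None:
        now = time.time()
    elapsed = max(0.0, now - ANCHOR_UNIX_TIME)
    return ANCHOR_TOKENS + elapsed * TOKENS_PER_SECOND

# One day after the anchor adds 100e6 * 86,400 = 8.64e12 tokens.
print(estimated_tokens(ANCHOR_UNIX_TIME + 86_400))
```

The live counter is just this function evaluated on every screen refresh.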
■ Transparency
All figures are illustrative estimates intended to communicate scale, not precise measurements. The underlying factors are:
| Metric | Value | Source |
|---|---|---|
| Energy per 1K tokens | ~0.0003 kWh | Google / DeepMind inference benchmarks; MLPerf |
| CO₂ intensity | 0.4 kg CO₂ / kWh | IEA global average grid intensity, 2024 |
| Water per 1K tokens | ~0.5 L | Microsoft Sustainability Report, 2023 |
| CO₂ offset per tree | ~21 kg CO₂ / year | US Forest Service estimates |
| Historical token growth | 2020–present | OpenAI usage blog; Epoch AI; AI Index 2024 |
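Applied directly, these factors turn a raw token count into the environmental equivalents shown around the site. A small sketch using the table's values (the function and variable names are ours, for illustration only):

```python
# Conversion factors from the table above (illustrative estimates, not
# precise measurements).
KWH_PER_1K_TOKENS = 0.0003     # energy per thousand tokens
KG_CO2_PER_KWH = 0.4           # IEA 2024 average grid carbon intensity
LITRES_PER_1K_TOKENS = 0.5     # cooling water per thousand tokens
KG_CO2_PER_TREE_YEAR = 21.0    # offset capacity of one tree per year

def environmental_equivalents(tokens: float) -> dict:
    """Translate a cumulative token count into kWh, CO2, water, and trees."""
    thousands = tokens / 1_000
    kwh = thousands * KWH_PER_1K_TOKENS
    kg_co2 = kwh * KG_CO2_PER_KWH
    litres = thousands * LITRES_PER_1K_TOKENS
    trees_per_year = kg_co2 / KG_CO2_PER_TREE_YEAR
    return {
        "kwh": kwh,
        "kg_co2": kg_co2,
        "litres_water": litres,
        "trees_to_offset_one_year": trees_per_year,
    }

# Example: one billion tokens is roughly 300 kWh, 120 kg CO2, and
# 500,000 litres of water under these factors.
print(environmental_equivalents(1e9))
```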
■ Plain-Language Guide
Not a tech expert? No problem. Here are plain-language answers to the most common questions about AI, tokens, and why any of this matters.
AI (Artificial Intelligence) is software that can do tasks we used to think only humans could do – like writing text, answering questions, translating languages, recognising photos, or generating images.
Modern AI systems (such as ChatGPT, Google Gemini, or Claude) are powered by large language models (LLMs) – giant programs trained on billions of web pages, books, and articles so they can predict what words should come next in a sentence.
AI models don't read words – they read tokens. A token is a small chunk of text, roughly 3–4 characters on average. Common short words like "the" or "is" are a single token each. Longer words get split into several tokens.
Every time you send a message to an AI chatbot, it reads your message (input tokens) and writes a reply (output tokens). Both cost energy.
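For a rough feel for the numbers, a token count can be approximated from character length alone. A toy estimator using the ~4-characters-per-token rule of thumb from above (real tokenizers vary by model, so treat this as an order-of-magnitude sketch):

```python
def rough_token_count(text: str, chars_per_token: float = 4.0) -> int:
    """Crude token estimate: roughly 3-4 characters per token on average."""
    return max(1, round(len(text) / chars_per_token))

prompt = "Summarise this article in three bullet points."
reply = "x" * 300  # stand-in for a ~300-character reply

# Both directions cost energy: input tokens (your prompt) plus
# output tokens (the model's reply).
total = rough_token_count(prompt) + rough_token_count(reply)
print(total)
```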
Inference is the moment an AI actually generates a response – when it takes your question and produces an answer, word by word (or token by token).
It's different from training (which is the one-time process of teaching the AI using vast amounts of data). Inference happens billions of times every day, around the clock, every time anyone uses an AI tool anywhere in the world.
Training is the expensive, one-time (or infrequent) process of building an AI model. It requires running data through billions of mathematical operations on specialised hardware for weeks or months. A single training run can use as much electricity as hundreds of homes use in a year.
Inference is the ongoing, every-second process of using that already-trained model to answer questions. Each individual inference is cheap, but with hundreds of millions of queries per day across every AI product on Earth, the cumulative energy use is enormous – and this is what this site tracks.
Generating each token requires running your input through a neural network with billions of mathematical multiplications on specialised chips called GPUs or TPUs. These chips are extremely powerful – and extremely power-hungry.
A single AI server rack can draw as much electricity as 20–30 family homes. Data centres housing thousands of those racks run 24 hours a day, 365 days a year. The chips also generate a lot of heat, requiring large cooling systems that consume even more electricity and water.
AI data centres use enormous amounts of water to cool their servers. When chips run at full power, they generate heat. That heat is carried away using chillers and cooling towers that evaporate large quantities of water into the air.
Microsoft's own sustainability report estimated that training a single large AI model can consume hundreds of thousands of litres of fresh water. Inference – happening continuously – adds billions more litres every year across the industry.
CO₂ (carbon dioxide) is a greenhouse gas. When power stations burn coal, oil, or natural gas to generate electricity, CO₂ is released into the atmosphere. This extra CO₂ traps heat from the sun, gradually warming the planet – a process known as climate change.
Because AI data centres consume huge amounts of electricity, and much of the world's electricity grid still relies on fossil fuels, running AI produces a significant amount of CO₂. Even data centres that use renewable energy have an indirect carbon footprint through manufacturing, water use, and grid draw during low-renewable periods.
It's complicated. AI can be a powerful tool for environmental good – helping model climate systems, optimise energy grids, or accelerate scientific research. But the current rapid growth of AI token consumption is also a significant and fast-growing source of energy and water use.
The goal of this site is not to say "AI is evil" – it's to make the hidden cost visible, so individuals and organisations can make more informed decisions about when and how to use AI, and so that pressure builds on AI companies to invest in efficiency and clean energy.
The numbers are estimates, not official measurements. No public real-time feed of global AI token consumption exists. We derive our figures from:
Published AI usage disclosures (OpenAI, etc.) · Epoch AI compute trend research · Academic papers on AI energy use · IEA electricity data · Microsoft sustainability reports.
The real-world figure could be higher or lower – the point is to communicate order of magnitude. The counter is live in the sense that it extrapolates forward in real time from a fixed anchor point at the current estimated global rate. See the Data Sources section above for the exact figures used.
The current estimated global AI inference rate is around 100 million tokens per second – generated by all the AI services (chatbots, search assistants, coding tools, image generators, API calls, etc.) running simultaneously across every country, every company, and every individual user on the planet.
That rate has grown by orders of magnitude since 2020, and is projected to keep growing as more AI products launch and more people adopt them. Even if one company's service is efficient, the sheer scale of global AI usage means the total climbs relentlessly.
Quite a lot, actually. Individual habits scale up when millions of people share them:
- Write shorter, clearer prompts – the more specific you are, the fewer tokens the AI needs to generate a good answer.
- Use smaller models – for simple tasks (summarising a paragraph, checking grammar), a lightweight model uses a fraction of the energy of a frontier model.
- Don't regenerate unnecessarily – if an answer is good enough, use it. Every regeneration is another burst of tokens.
- Tell organisations you care – write to your AI providers asking about their energy roadmap. Consumer pressure works.
- Share this page – the more people understand the cost, the more pressure there is to improve efficiency.
■ Take Action
Awareness is the first step – but action matters more. Check the Tips section and the Personal Footprint Calculator to see your own impact and how to reduce it.
Fewer tokens in means fewer tokens out. Clear, specific questions outperform vague essays every time.
Smaller models handle most everyday tasks at a fraction of the compute cost of frontier models.
Re-reading the same content, regenerating the same boilerplate, or spinning up duplicate contexts all cost tokens unnecessarily.
Share this page. The more people understand the cost, the more pressure builds on AI providers to improve efficiency.
■ Release History
All notable changes to this project are documented here. The format follows Keep a Changelog and this project adheres to Semantic Versioning. View all official releases on GitHub Releases.