Summary: Algorithms that set prices are not magic black boxes. They are strategies in a repeated game. Under certain conditions, even straightforward algorithms — ones that never try to threaten or collude — can steer markets to higher prices. That happens when one algorithm’s learning rules meet another algorithm’s behavior and the pair settle into an equilibrium that rewards high prices. Regulators and platforms cannot rely only on finding secret agreements. They must pay attention to strategic dynamics: which algorithms are in play, how they learn, and what stable outcomes those interactions produce. This post walks through the mechanics, the recent results, the policy options, and the practical trade-offs, and ends with questions worth debating right now.
Why this matters right now
Prices affect people’s wallets. Firms rely on automated pricing to compete, protect margins, and respond to supply shocks. Regulators want fair competition. Marketplaces want predictable listings. When algorithms intersect with strategic reasoning, outcomes can look like collusion — high prices — even when no human ever shook hands. That gap between appearance and proof is what makes regulation hard. If you care about low prices, fair competition, or clean markets, you should care about how these algorithms behave in the wild.
Simple story, big lesson
Imagine two sellers racing to be cheapest. Traditional economics says that competition drives prices down. Now replace the sellers with pricing programs that adapt with data. If these programs learn to punish sudden price cuts by retaliating with deep price drops that destroy profits for everyone, each program may prefer a tacit truce: keep prices high and avoid the war. That implicit threat of a price war can sustain high prices without any explicit agreement. The mechanism is strategic interaction, not paperwork.
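To see the logic, here is a minimal Python sketch. Everything in it is made up for illustration: the stylized payoffs, the two price levels, and the trigger rule. It is not anyone’s deployed pricing system.

```python
HIGH, LOW = 10.0, 4.0   # illustrative price levels

def profit(my_price, rival_price):
    # Stylized payoff: the cheaper seller takes the market, a tie splits it.
    if my_price < rival_price:
        return my_price
    if my_price > rival_price:
        return 0.0
    return my_price / 2

def trigger_strategy(history):
    # Cooperate at HIGH until the rival ever undercuts, then punish with
    # LOW forever: the "price war" threat in code form.
    return LOW if any(rival < HIGH for _, rival in history) else HIGH

def simulate(strategy_a, strategy_b, rounds=50):
    history_a, history_b = [], []   # each entry: (my_price, rival_price)
    total_a = total_b = 0.0
    for _ in range(rounds):
        pa, pb = strategy_a(history_a), strategy_b(history_b)
        total_a += profit(pa, pb)
        total_b += profit(pb, pa)
        history_a.append((pa, pb))
        history_b.append((pb, pa))
    return total_a, total_b

def defect_once(history):
    # Identical to the trigger strategy except for one undercut in round 5.
    return LOW - 1 if len(history) == 5 else trigger_strategy(history)

print(simulate(trigger_strategy, trigger_strategy))  # tacit truce: high prices
print(simulate(trigger_strategy, defect_once))       # one defection, then war
```

Running it shows the cooperative pair earning more over fifty rounds than the pair where one side defects once and triggers the war. That gap is the implicit threat doing its work.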
Game theory basics: regret, equilibrium, and learning
Game theorists study repeated plays of a strategic situation. A useful concept is regret: after many rounds, would a player have earned more by picking a different fixed strategy? Algorithms that aim for low regret learn to perform well against many opponents. No-swap-regret algorithms go further: they ensure that you cannot gain by consistently swapping one action for another across your past plays. When two no-swap-regret learners face each other, their empirical play converges to a correlated equilibrium of the single-round game. For pricing, that can mean competitive prices. That promise made researchers hopeful: pick the right learning rule, and bad outcomes vanish.
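To make the two notions concrete, here is a small, self-contained sketch that computes both kinds of regret from a logged history of play. The data layout, a hindsight payoff matrix per round plus the action actually chosen, is an assumption for illustration.

```python
def external_regret(payoffs, played):
    """payoffs[t][a] = payoff action a would have earned in round t (hindsight);
    played[t] = index of the action actually chosen in round t."""
    realized = sum(payoffs[t][a] for t, a in enumerate(played))
    n_actions = len(payoffs[0])
    best_fixed = max(sum(row[a] for row in payoffs) for a in range(n_actions))
    return best_fixed - realized

def swap_regret(payoffs, played):
    """Gain from the best 'swap rule': for each action a, all the rounds where
    a was played may be rerouted to some other action s(a). Swap regret is
    always at least as large as external regret."""
    n_actions = len(payoffs[0])
    total = 0.0
    for a in range(n_actions):
        rounds_a = [t for t, chosen in enumerate(played) if chosen == a]
        realized = sum(payoffs[t][a] for t in rounds_a)
        best_swap = max(sum(payoffs[t][b] for t in rounds_a)
                        for b in range(n_actions))
        total += best_swap - realized
    return total

# Tiny demo: 3 rounds, 2 actions, the player always chose action 0.
demo_payoffs = [[1.0, 2.0], [1.0, 0.0], [1.0, 2.0]]
demo_played = [0, 0, 0]
print(external_regret(demo_payoffs, demo_played))  # action 1 earns 4 vs 3 -> 1.0
print(swap_regret(demo_payoffs, demo_played))      # only one action played -> 1.0
```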
No-swap-regret looked like the fix — until it didn’t
Researchers translated the old result to price competition. The 2024 work by Hartline and colleagues showed that when both sellers use no-swap-regret learners, the market trends toward competitive pricing. Collusion seemed blocked. But the theorem only binds the sellers who adopt that rule, and real markets do not force every seller to pick the same learning rule. What happens when only one seller uses a no-swap-regret algorithm and the other uses a different, benign-looking strategy?
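One way to build intuition is self-play with a standard learner. The sketch below uses Hedge (multiplicative weights), which guarantees no external regret; that is weaker than the no-swap-regret property the theorem requires, so treat this as suggestive, not as a reproduction of the Hartline result. The price grid and the Bertrand-style payoff are made up.

```python
import math
import random

PRICES = list(range(1, 11))          # illustrative price grid

def profit(my_price, rival_price):
    # Stylized Bertrand payoff: the lower price takes the whole market,
    # a tie splits it. Made-up numbers, for illustration only.
    if my_price < rival_price:
        return float(my_price)
    if my_price > rival_price:
        return 0.0
    return my_price / 2

class HedgeLearner:
    """Hedge / multiplicative weights: a standard no-external-regret learner."""

    def __init__(self, eta=0.05):
        self.eta = eta
        self.weights = [1.0] * len(PRICES)

    def choose(self):
        # Sample a price in proportion to the current weights.
        r = random.random() * sum(self.weights)
        for price, w in zip(PRICES, self.weights):
            r -= w
            if r <= 0:
                return price
        return PRICES[-1]

    def update(self, rival_price):
        # Full-information update: reweight every candidate price by the
        # payoff it would have earned against the observed rival price
        # (scaled into [0, 1]), then normalize for numerical stability.
        self.weights = [w * math.exp(self.eta * profit(p, rival_price) / 10)
                        for w, p in zip(self.weights, PRICES)]
        s = sum(self.weights)
        self.weights = [w / s for w in self.weights]

a, b = HedgeLearner(), HedgeLearner()
avg_prices = []
for _ in range(5000):
    pa, pb = a.choose(), b.choose()
    a.update(pb)
    b.update(pa)
    avg_prices.append((pa + pb) / 2)

print("average price over the last 1000 rounds:",
      sum(avg_prices[-1000:]) / 1000)
```

The theoretical prediction for no-swap-regret learners is that average prices settle at the competitive end of the grid; printing the late-round average lets you check how this toy behaves.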
The surprising weapon: nonresponsive strategies
A nonresponsive strategy ignores the opponent and plays fixed probabilities over prices. That sounds harmless: it cannot threaten, and it cannot react. Yet Collina and Arunachaleswaran found something striking. Against a no-swap-regret learner, a nonresponsive opponent that places high probability mass on high prices, and occasionally chooses low prices, can coax the learner into raising its average price. The nonresponsive player then reaps profit in the rounds when it undercuts. The result is high prices sustained in equilibrium. The phenomenon looks like collusion but has no treaty, no failed code review, and no smoking gun to present in court.
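The same machinery shows the exploit. Continuing the previous sketch (it reuses PRICES, profit, and HedgeLearner from above), replace one learner with a nonresponsive rival that plays a fixed mix: price 10 most of the time, an occasional lower price at 8. The mix is an illustrative assumption, not the construction from the paper.

```python
# Continues the previous sketch: reuses PRICES, profit, and HedgeLearner.
import random

def nonresponsive_rival():
    # Fixed probabilities, no reaction to anything: mostly price 10, with
    # an occasional lower price at 8.
    return 10 if random.random() < 0.85 else 8

learner = HedgeLearner()
learner_prices, rival_profit = [], 0.0
for _ in range(5000):
    mine = learner.choose()
    rival = nonresponsive_rival()
    learner.update(rival)
    learner_prices.append(mine)
    rival_profit += profit(rival, mine)

print("learner's average price, last 1000 rounds:",
      sum(learner_prices[-1000:]) / 1000)
print("nonresponsive rival's average per-round profit:",
      rival_profit / 5000)
```

Against that fixed mix, the learner’s best response in this toy is to price at 9, just under the rival’s usual price. Most transactions then clear at high prices, and the rival profits in the rounds it plays 8. No threat was ever made.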
Why the result cuts against intuition
We expect threats to explain implicit collusion. We expect strategies that never react to be harmless. The math shows these expectations can be wrong. Equilibrium thinking matters: if neither player has an incentive to switch given the current strategies, the outcome is stable. Regulators who look for explicit agreements will miss stable high-price equilibria that rest on statistical learning and strategic compatibility. That is the hard point: stability without agreement.
Policy options and trade-offs
We have a few clear policy moves, each with trade-offs. One option is a technical standard: allow only algorithms that satisfy the no-swap-regret property. Hartline favors this. It rules out many dangerous interactions and gives a clean, testable rule. In some cases the property can even be checked from behavior alone, without reading the code. That appeals to regulators who want enforceable rules.
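What a behavioral check could look like, in sketch form: replay a seller’s logged prices against the logged rival prices and estimate empirical swap regret, reusing the swap_regret helper from the earlier sketch. This is a hypothetical audit harness, not an established regulatory test.

```python
# Hypothetical black-box audit, reusing the swap_regret helper above:
# reconstruct hindsight payoffs from decision logs and estimate per-round
# swap regret. A learner that genuinely satisfies the no-swap-regret
# property should show this quantity shrinking toward zero as the log grows.

def audit_swap_regret(logged_prices, logged_rival_prices, price_grid, payoff):
    T = len(logged_prices)
    # payoffs[t][a]: what price_grid[a] would have earned in round t.
    payoffs = [[payoff(p, rival) for p in price_grid]
               for rival in logged_rival_prices]
    played = [price_grid.index(p) for p in logged_prices]
    return swap_regret(payoffs, played) / T
```

Fed with decision logs like those from the simulations above, it gives an auditor a number to track without any access to the seller’s source code.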
But the standard is blunt. It locks sellers into a narrow class of learners. Firms may lose flexibility to innovate pricing models that serve legitimate purposes, like reacting to supply constraints, perishable inventory, or tailored discounts. Banning other algorithms may reduce some harm but create other costs. Can markets operate efficiently when firms face only no-swap-regret choices? Maybe in many settings — maybe not in all. Who decides?
Can we ban only “bad” algorithms?
An attractive idea is a behavioral rule: ban algorithms that facilitate tacit collusion — for instance, those that can generate mutual threats or coordinated price paths. That runs into legal and technical problems. How do you prove an algorithm intended to collude? The new results make the problem worse: even algorithms that cannot threaten can still drive prices up. Regulators would need to assert that the outcome itself is problematic. But outcomes can be produced by benign strategies, which complicates enforcement: you must then regulate outcomes, not intent or mechanism.
Platform controls and monitoring
Marketplaces like Amazon could limit the set of allowed pricing tools or run audits of pricing behavior. Platforms can set rules, detect suspicious synchronized moves, and intervene. This is attractive because platforms control access and data. But platforms also have conflicts: they profit from higher fees and may sell tools to sellers. Will they act as strict gatekeepers? That raises hard incentive questions. What will you ask of a platform that profits from both sides?
Practical steps for firms and compliance teams
If you run pricing algorithms, take these steps: 1) log decisions and training signals so you can explain behavior later; 2) run counterfactual tests that pit your algorithm against plausible opponent classes (a sketch of such a harness follows below); 3) restrict randomization patterns that can induce others to raise prices; 4) document product-market reasons when you use strategies that produce high average prices. These are not full legal shields, but they build defensible practice. Regulators respond to clarity and records. Clear records buy you time and options.
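Here is a hypothetical harness for step 2, a sketch rather than a vetted compliance tool. The opponent classes, the price grid, the pricing-function interface, and the benchmark threshold are all assumptions you would replace with your own.

```python
import random

PRICE_GRID = list(range(1, 11))   # illustrative price grid

# Plausible opponent classes to replay against. These are assumptions for
# illustration, not a vetted taxonomy of real market participants.
OPPONENT_CLASSES = {
    "always_lowest": lambda history: min(PRICE_GRID),
    "mostly_high": lambda history: 10 if random.random() < 0.85 else 8,
    "uniform_random": lambda history: random.choice(PRICE_GRID),
}

def stress_test(pricing_fn, rounds=2000, benchmark=3.0):
    """Replay pricing_fn against each opponent class and flag runs whose
    average transaction price exceeds a competitive benchmark.

    pricing_fn(history) -> price, where history is a list of
    (my_price, rival_price) pairs. The interface is hypothetical."""
    report = {}
    for name, opponent in OPPONENT_CLASSES.items():
        history = []
        total = 0.0
        for _ in range(rounds):
            mine = pricing_fn(history)
            rival = opponent(history)
            history.append((mine, rival))
            total += min(mine, rival)        # the price the sale clears at
        avg = total / rounds
        report[name] = {"avg_price": round(avg, 2), "flagged": avg > benchmark}
    return report

# Example: a naive rule that undercuts the rival's last observed price.
def undercut_last(history):
    return max(1, history[-1][1] - 1) if history else 5

print(stress_test(undercut_last))
```

Note that even the naive undercutting rule gets flagged against the mostly-high rival: undercutting a high price still yields high transaction prices, which is exactly the dynamic this post describes.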
Questions regulators should ask now
How do you define wrongful coordination when no one agreed? How do you distinguish harmful equilibria from efficient market responses? Which algorithms should be permitted on public marketplaces? Which should platforms block? Who audits the auditors?
Those are open questions. What trade-offs are you willing to accept: fewer algorithmic choices for sellers in return for lower risk of high-price equilibria? Or tolerate some risk while focusing on detection and enforcement? Which sounds more workable in your market?
Behavioral and legal framing
Antitrust law has long required proof of an agreement, and often intent, to establish liability. The algorithmic problem strains that doctrine. You can sympathize with two views. One: courts must stick to bright lines; agreements are illegal, strategic equilibria are not. Two: the law must adapt to outcomes that harm consumers even when there is no express agreement. Both views reflect real concerns: private liberty versus public harm. Saying 'No' to broad, outcome-based enforcement defends legal certainty; saying 'No' to a narrow doctrine of agreement defends consumers. Which 'No' do you prefer?
What researchers and policymakers should study next
We need work on several fronts: empirical studies on deployed pricing systems; diagnostics that expose which equilibria are likely given real-world agent mixes; audit methods platforms can apply without breaching commercial secrets; and legal theory that maps algorithmic equilibria into actionable enforcement standards. The theoretical findings are a warning: do not assume harmlessness from nonresponsive behavior. Ask: which mixtures of algorithms live in my market today?
How this affects you — consumers, firms, and platforms
Consumers face the risk of higher prices that look organic and legal. Firms must weigh short-term gains from strategies that exploit learning dynamics against long-term risks from enforcement or reputational harm. Platforms must decide whether to police algorithmic tools, and how aggressive to be. Each actor has legitimate aims and fears. A good policy balances those aims while protecting consumers’ buying power and market entry for smaller sellers.
Final framing and an invitation to debate
The headline is direct: algorithmic pricing can produce high prices without collusion. The mechanism is strategic interaction and equilibrium stability, not secret deals. That makes regulation harder but not impossible. Regulators can set technical standards, platforms can police tools, and firms can document decisions to defend themselves. Each path has winners and losers.
I want to hear your view: which trade-off would you accept — strict algorithmic limits to protect consumers, or looser rules that preserve sellers’ freedom to innovate? Which pieces of evidence would convince you that an outcome is harmful and worthy of intervention?
We must ask these questions and keep asking them. Silence helps make bad equilibria permanent. What will you ask next?
#AlgorithmicPricing #GameTheory #Antitrust #PricingAlgorithms #MarketDesign #AIethics #CompetitionPolicy
Featured Image courtesy of Unsplash and Jason Leung (7bpCPMOZ1rs)