While Nvidia hits a $5T market cap, AMD locks in $60B with Meta, and TSMC still cannot meet the AI chip backlog, an unexpected response just went live in alpha: HAON PowerHub, a peer-to-peer GPU marketplace that bills by the second and promises costs around one third of what AWS, GCP, or Azure charge today for the same hardware.
The proposition is simple and provocative: take the millions of graphics cards sitting idle in homes, studios, and corporate workstations across the world, and turn each one into a node available to anyone who needs to run AI, train models, generate images, or do 3D rendering. The owner earns on idle hours. The user pays only for what they run. HAON takes a 5% commission and stays out of the data path.
The problem the market pretended did not exist
The AI industry today operates under an absurd bottleneck: TSMC's fabs concentrated in Taiwan, HBM in chronic shortage, and hyperscalers (AWS, Google, Microsoft, Oracle) booking entire production runs of Blackwell and Instinct before the chips even leave the factory. The fallout is brutal for anyone outside the top 50 companies: independent researchers, digital artists, startup devs, solo creators, all fighting over leftover capacity at inflated prices.
On the other side sits a silent army of idle hardware. Gamers with RTX 4090s who play three hours a night. Animation studios with render farms idle on weekends. Crypto miners whose rigs stopped being profitable. Corporate PCs powered down from 7pm to 9am.
HAON PowerHub joins the two sides.
How it works, in three steps
For GPU providers (miners): install an agent on a Windows machine (one-click installer), Mac, or Linux (terminal), set the hourly rate, and the card gets listed on the marketplace. No router port-forwarding needed — the agent is NAT-friendly and uses end-to-end encrypted tunnels.
For GPU users (workers): load prepaid credit via Stripe, pick the card by model (RTX 4090, A100, H100, etc.), region, and runtime, and launch the workload. Supports Ollama, ComfyUI, and any custom HTTP workload.
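The user-side flow above boils down to three choices: card model, region, and runtime. HAON's actual API is not documented here, so every field name in this sketch is a hypothetical illustration, not the platform's real schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class JobSpec:
    """Hypothetical job request. Field names are illustrative only;
    the article does not document HAON's real API."""
    gpu_model: str     # e.g. "RTX 4090", "A100", "H100"
    region: str        # "EU-WEST" is the only region in the current alpha
    max_minutes: int   # requested runtime cap
    workload: str      # "ollama", "comfyui", or a custom HTTP image

spec = JobSpec(gpu_model="RTX 4090", region="EU-WEST",
               max_minutes=30, workload="ollama")
print(asdict(spec))
```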
Billing: measured every 10 seconds. If you run 17 minutes and 30 seconds, you pay for 17 minutes and 30 seconds. Unused credit returns automatically at session end. Minimum 15 minutes per job.
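Under those stated rules (10-second metering, 15-minute minimum), a session's cost is easy to estimate. The rounding direction is an assumption on my part; the article only says billing is measured every 10 seconds:

```python
def session_cost(seconds_used: float, hourly_rate: float) -> float:
    """Estimate cost under HAON's stated billing: metered in 10-second
    ticks with a 15-minute minimum per job. Rounding up to the next
    tick is an assumption, not a documented rule."""
    MIN_SECONDS = 15 * 60          # 15-minute job minimum (from the article)
    TICK = 10                      # metering granularity (from the article)
    billable = max(seconds_used, MIN_SECONDS)
    ticks = -(-billable // TICK)   # round up to the next 10-second tick
    return round(ticks * TICK * hourly_rate / 3600, 4)

# The article's 17-minute-30-second example, at an assumed $0.80/hour:
print(session_cost(17 * 60 + 30, 0.80))  # → 0.2333
```

A 5-minute job still bills as 15 minutes because of the per-job minimum, which is the one place where "pay only for what you ran" does not hold exactly.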
The math that changes the game
A comparable GPU hour on AWS today costs around $2.40. On HAON, an RTX 4090 goes for around $0.80/hour, one third of that. For anyone training LoRAs, generating video with open-weight models, running batch inference, or processing images at scale for clients, this completely changes the cost equation.
For card owners, it is real passive income: a 4090 running third-party workloads 8 hours a day can pay the electricity bill and still leave a margin.
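As a rough sanity check on that claim, assume a 4090 drawing about 450 W under load and electricity at $0.20/kWh; both figures are my assumptions for illustration, not numbers from HAON:

```python
def daily_margin(hours: float, rate_per_hour: float, power_kw: float,
                 elec_price_kwh: float, commission: float = 0.05) -> float:
    """Rough daily margin for a GPU provider. Power draw and electricity
    price are illustrative assumptions; only the 5% commission and the
    $0.80/h rate come from the article."""
    revenue = hours * rate_per_hour * (1 - commission)  # HAON keeps 5%
    electricity = hours * power_kw * elec_price_kwh
    return round(revenue - electricity, 2)

# RTX 4090 at ~450 W, rented 8 h/day at $0.80/h, electricity at $0.20/kWh:
print(daily_margin(8, 0.80, 0.45, 0.20))  # → 5.36
```

At those numbers the card nets $6.08 after commission against $0.72 of electricity, so the "pays the bill with margin to spare" claim looks plausible, at least where power is cheap.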
Privacy is not a footnote
The point that sets it apart from competitors: HAON does not store or proxy the workload. Data flows through end-to-end encrypted tunnels between worker and miner, without going through platform servers. HAON sits in the middle of the contract (matchmaking, payment, reputation), not in the middle of the data. For anyone running proprietary datasets or confidential models, this is critical — and it is exactly the kind of guarantee that AWS and GCP, by architecture, cannot provide the same way.
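The split described above can be sketched conceptually. The XOR cipher below is a toy stand-in for whatever real encrypted transport the agent uses; the only point it illustrates is that the platform handles matchmaking metadata while plaintext exists solely at the two endpoints:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy one-time-pad-style cipher, standing in for a real E2E tunnel.
    return bytes(b ^ k for b, k in zip(data, key))

key = secrets.token_bytes(64)            # exchanged only between the two peers
payload = b"proprietary dataset batch"   # the confidential workload data
ciphertext = xor(payload, key)

# What the platform sees: matchmaking metadata, never the payload.
platform_sees = {"gpu": "RTX 4090", "region": "EU-WEST", "rate_usd_h": 0.80}

# What the miner recovers at the far end of the tunnel:
assert xor(ciphertext, key) == payload
```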
Who wins
- Independent researchers — finally able to train reasonably-sized models without credit grants
- Digital artists — self-hosted Midjourney-style generation, custom ComfyUI pipelines, 3D rendering at scale
- Startup devs — prototypes with open-weight models without burning runway on compute
- AI video creators — Sora-, Veo-, and Wan-style models running at home
- Animation studios — render farms on demand without capex
And on the supply side: gamers, ex-miners, studios with idle workstations, anyone with a decent card and reasonable electricity.
Current status: alpha, EU-WEST, Stripe test mode
The platform is in public alpha on European infrastructure (EU-WEST), with payments still in Stripe test mode. The site itself admits to "rough edges, expecting feedback". This is exactly when it pays off to enter: alpha feedback weighs far more than mature-product feedback, and early arrivals build reputation in the marketplace.
Near-term roadmap: expansion to US-EAST, support for professional cards (H100/H200), integration with fine-tuning platforms, and an on-chain reputation layer that will separate serious miners from opportunists.
Why this is bigger than it looks
Every time a wave of expensive hardware concentrates in few hands, the market eventually creates a peer-to-peer answer. It happened with lodging (Airbnb), transport (Uber), storage (Filecoin, Storj), and CDNs (Fleek). AI compute was waiting its turn.
HAON PowerHub will not dethrone AWS tomorrow. But it can do for AI compute what Airbnb did for hotels: create a parallel layer, cheaper, more distributed, and impossible to ignore — first by the small players, then inevitably by the big ones.
To get in (alpha is open, no waiting list yet): haon.run