Turn spare GPUs into real revenue
Contributor AI matches your idle GPUs with enterprise workloads. Clients get reliable B2B compute; contributors earn automatically.
How Contributor AI works
Clients submit jobs
Companies describe their workload, SLA and budget. Jobs enter a global queue.
Contributors run tasks
You connect your machine with our agent. Tasks are scheduled based on hardware profile and rating.
Automatic payouts
When jobs complete, earnings are calculated per GPU‑hour and sent to your balance.
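The matching step above can be sketched in a few lines. This is a toy scoring rule with illustrative field names, not the real Contributor AI scheduler, which weighs more signals than VRAM and rating:

```python
# Hypothetical matcher: pick the best online contributor for a queued job
# by VRAM fit and highest rating. All field names are illustrative.

def match_job(job, contributors):
    """Return the best contributor for a job, or None if none qualify."""
    eligible = [
        c for c in contributors
        if c["online"] and c["vram_gb"] >= job["min_vram_gb"]
    ]
    if not eligible:
        return None
    return max(eligible, key=lambda c: c["rating"])

contributors = [
    {"id": "rig-1", "vram_gb": 12, "rating": 4.6, "online": True},
    {"id": "rig-2", "vram_gb": 24, "rating": 4.9, "online": True},
    {"id": "rig-3", "vram_gb": 24, "rating": 4.2, "online": False},
]

job = {"id": "job-42", "min_vram_gb": 16}
best = match_job(job, contributors)
print(best["id"])  # rig-2: the only online contributor with >= 16 GB
```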
Why run Contributor AI on your GPUs?
- Monetize idle time of gaming rigs, homelabs or small GPU farms.
- Transparent per‑minute pricing, clear dashboards and job history.
- You control when the worker is online and which GPUs are used.
For B2B clients
- Run batch jobs, inference and fine‑tuning without buying hardware.
- Priority queues and SLAs for production workloads.
- Simple REST API and dashboards for your team.
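As a sketch of what queuing a job through the REST API might look like: the endpoint, field names and SLA tiers below are assumptions for illustration, not the documented Contributor AI schema.

```python
import json

# Hypothetical job payload: workload description, SLA tier and budget,
# mirroring what a client specifies when submitting a job.
job = {
    "image": "ghcr.io/acme/finetune:latest",   # container to run (illustrative)
    "command": ["python", "train.py"],
    "min_vram_gb": 24,
    "sla": "priority",        # e.g. "batch" or "priority" (assumed tiers)
    "budget_usd": 120.0,
    "max_gpu_hours": 200,
}

body = json.dumps(job)
# A client would POST this to something like /v1/jobs with an API key:
#   requests.post("https://api.example.com/v1/jobs", data=body,
#                 headers={"Authorization": "Bearer <token>"})
print(body)
```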
How much can I earn?
Earnings depend on GPU model, uptime and demand. The figures below are examples, not a promise of profit:
- RTX 3060, 12 GB — 8 hours/day, medium demand → roughly 50–90 USD/month.
- RTX 4090, 24 GB — 16 hours/day, high demand → roughly 250–450 USD/month.
- Datacenter GPU (A100/H100 class) — always‑on → significantly higher, contact us for custom terms.
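The arithmetic behind these ranges is just rate × utilized GPU‑hours. The per‑hour rates below are back‑of‑envelope assumptions chosen to match the figures above, not published Contributor AI prices:

```python
def monthly_estimate(rate_usd_per_gpu_hour, hours_per_day, days=30):
    """Rough monthly earnings: hourly rate times utilized GPU-hours."""
    return rate_usd_per_gpu_hour * hours_per_day * days

# Assumed rates, for illustration only.
low = monthly_estimate(0.25, 8)    # RTX 3060-class, medium demand
high = monthly_estimate(0.94, 16)  # RTX 4090-class, high demand
print(round(low), round(high))  # 60 451
```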
FAQ (short)
What do I install on my machine?
A small open‑source agent that connects to Contributor AI, pulls tasks, runs containers and reports results. You can stop it at any time.
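Conceptually, the agent's loop looks like the sketch below. The in‑memory queue and stub functions stand in for the real agent, which polls the Contributor AI API over HTTPS and launches containers:

```python
import queue

# Stand-in for the server-side job queue; the real agent polls an API.
task_queue = queue.Queue()
task_queue.put({"id": "t1", "image": "runner:latest"})

results = []

def run_container(task):
    """Placeholder for launching the task's container (e.g. via Docker)."""
    return {"task_id": task["id"], "status": "completed"}

def agent_loop():
    # Drain the queue; the real agent keeps polling for as long as
    # you leave the worker online.
    while not task_queue.empty():
        task = task_queue.get()
        result = run_container(task)
        results.append(result)   # stand-in for reporting back to the API

agent_loop()
print(results)
```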
Can jobs see my personal files?
By default tasks run in isolated containers with access only to a working directory you control.
When are payouts made?
We aggregate earnings by GPU‑hours and pay out on a regular schedule (e.g. weekly, or once your balance passes a payout threshold).
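A minimal sketch of threshold‑based settlement, assuming an illustrative rate and threshold (the real schedule and figures are set by Contributor AI):

```python
def settle(gpu_hours, rate_usd, balance, threshold_usd=25.0):
    """Add this period's earnings to the balance; pay out the whole
    balance once it crosses the threshold. Rate and threshold are
    illustrative, not actual Contributor AI terms."""
    balance += gpu_hours * rate_usd
    payout = 0.0
    if balance >= threshold_usd:
        payout, balance = balance, 0.0
    return payout, balance

# 30 GPU-hours at an assumed 0.50 USD/hour, on top of an 18 USD balance:
payout, balance = settle(gpu_hours=30, rate_usd=0.50, balance=18.0)
print(payout, balance)  # 33.0 0.0
```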