EU-based

Fastest & Cheapest
CI Runners

Run GitHub Actions and GitLab CI pipelines smarter. Stop wasting money on idle runners and jobs — our Spike Instances scale to fit your workload in real time.

Run GitLab CI/CD and GitHub Actions jobs up to 2x faster and 10x cheaper. Pay for real-time usage, not for idle runners. 100% compatible with your current configuration.
The Problem

As your business grows,
slow CI and rigid billing inflate costs

Rigid Billing

You’re charged for time, not usage: costs become unpredictable as you scale.

Slow CI

Heavy workloads clog up CI pipelines, stalling builds and slowing teams down.

Wasted Cloud Spend

You’re paying for idle resources that your pipelines don’t actually use.

No Clarity

It's hard to optimize when you can’t see where your CI resources are going.

The Solution

Finally, a Compute Platform
That Solves the Real Problems

High-performance by default

Get rocket-fast builds with generous CPU and memory limits — no tuning needed.

Fair, load-based billing

Every second, we track each job’s CPU and memory load. You pay only for actual load; idle CPU is free.
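The difference between wall-clock billing and per-second, load-based billing can be sketched as follows. The load samples and job shape below are made up for illustration; only the per-vCPU-second rate matches the published overage price:

```python
# Illustrative comparison of wall-clock vs load-based billing.
# The per-second load samples below are hypothetical, not real telemetry.
PRICE_PER_VCPU_SECOND = 0.00002  # EUR, the published overage rate

# One sample per second: fraction of a vCPU the job actually used.
# 1 minute of full load, then 4 minutes of a mostly idle tail.
cpu_load_samples = [1.0] * 60 + [0.05] * 240

# Wall-clock billing charges a full vCPU for every second the job exists.
wall_clock_cost = len(cpu_load_samples) * 1.0 * PRICE_PER_VCPU_SECOND

# Load-based billing sums only the load that was actually measured.
load_based_cost = sum(cpu_load_samples) * PRICE_PER_VCPU_SECOND

print(f"wall-clock: €{wall_clock_cost:.5f}  load-based: €{load_based_cost:.5f}")
```

Under this model the idle tail of a job costs almost nothing, which is why the same pipeline gets markedly cheaper without any configuration change.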

Seamless integration

Plug into your existing setup in minutes — no vendor lock-in, no headaches.
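As a sketch of what "no configuration changes" means in practice for GitHub Actions: switching runners is typically a one-line edit to `runs-on`. The runner label below is illustrative; the actual label would come from your integration settings:

```yaml
# Hypothetical workflow: only the `runs-on` label changes,
# every other line stays exactly as it was.
name: build
on: push
jobs:
  test:
    runs-on: spike-runner   # illustrative label from your integration
    steps:
      - uses: actions/checkout@v4
      - run: make test
```

For GitLab CI the equivalent change is adding the runner's tag to a job's `tags:` list; the rest of `.gitlab-ci.yml` is untouched.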

No trade-offs

Security and speed
are not a trade-off

Spike Instances: Mighty MicroVMs

Every job spins up a clean KVM-based instance that delivers near bare-metal performance and strong tenant isolation.

Read more

Ephemeral filesystem

Every job mounts a dedicated filesystem — high-performance, on-demand storage that vanishes on completion, leaving zero residual data.

Read more

Pricing plans

Start for free and scale when you’re ready. No hidden fees, no wasted spend.

Free

€0/m

1 integration with your GitLab or GitHub organization
10 concurrent jobs
12 vCPU and 32 GB of Memory per Job
Usage-based pricing for resources
400 vCPU-minutes included monthly
€0.00002 per vCPU-second over included
800 GB-minutes included monthly
€0.000001 per GB-second over included
GitHub runners work with organization accounts only
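To make the included quotas and overage rates concrete, here is a worked example of how a Free-tier overage bill could be computed. The rates and quotas come from the plan above; the monthly usage figures are hypothetical:

```python
# Free plan quotas and overage rates, as listed above.
INCLUDED_VCPU_MIN = 400      # vCPU-minutes included monthly
INCLUDED_GB_MIN = 800        # GB-minutes included monthly
VCPU_SEC_RATE = 0.00002      # EUR per vCPU-second over the included quota
GB_SEC_RATE = 0.000001       # EUR per GB-second over the included quota

# Hypothetical usage for one month.
used_vcpu_min = 600
used_gb_min = 1000

# Only usage beyond the included quota is billed, converted to seconds.
vcpu_overage_sec = max(0, used_vcpu_min - INCLUDED_VCPU_MIN) * 60
gb_overage_sec = max(0, used_gb_min - INCLUDED_GB_MIN) * 60

total = vcpu_overage_sec * VCPU_SEC_RATE + gb_overage_sec * GB_SEC_RATE
print(f"overage: €{total:.3f}")
```

Staying within the included 400 vCPU-minutes and 800 GB-minutes makes both overage terms zero, so the Free tier genuinely costs €0.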

Business

€50/m

3 integrations with your GitLab or GitHub organization
unlimited concurrent jobs
48 vCPU and 96 GB of Memory per Job
Usage-based pricing for resources
2000 vCPU-minutes included monthly
€0.00002 per vCPU-second over included
4000 GB-minutes included monthly
€0.000001 per GB-second over included
10 GB of Flexible Persistent File Storage included
€0.00000006 per GB-second over included
The Trial includes 400 vCPU-minutes, 800 GB-minutes of memory, and Flexible Persistent File Storage.

Enterprise

custom integrations with your GitLab or GitHub organization
unlimited concurrent jobs
Custom resource limits

General (Free / Business / Enterprise)

Dynamic load-based billing: pay for CPU-seconds and memory-seconds actually utilized by your process
Each job runs in its own clean, isolated virtual machine
Maximum concurrent jobs: 10 / unlimited / unlimited
vCPU allocated for each job: 12 / 48 / custom
Memory allocated for each job (GB): 32 / 96 / custom
Flexible Persistent File Storage included (GB), for sharing data among your jobs: 0 / 10 / custom
Job cache, ready to use with your job provider (GitLab, GitHub, etc.)
Use GPU in your jobs
Integrations, to isolate different teams, projects, or organizations: 1 / 3 / custom
Job runners per integration: 1 / 3 / custom
Manage your runners via dashboard or declarative API
Ticket-based support: 1 business day
Priority support
Dedicated nodes for extra isolation of your workloads
Choose the region to run your jobs

GitLab CI (Free / Business / Enterprise)

Maximum service containers per job: 2 / 10 / custom
Ephemeral storage allocated for each job (GB): 100 / 100 / custom
Interactive Web Terminal, allowing real-time debugging of pipelines in your GitLab dashboard

GitHub Actions (Free / Business / Enterprise)

Ephemeral storage allocated for each job (GB): 150 / 150 / custom
GitHub environment fully compatible with the official GitHub Actions runners
Coming soon
Local Docker layer cache for even faster jobs
MS Windows execution environment
ARM-based compute nodes