Australian GPU Compute — Available Now

High-performance GPU compute,
hosted in Australia

We deploy NVIDIA H200 servers inside Tier III Australian data centres — enterprise-grade GPU compute available on-demand and via reserved capacity contracts. Australian data residency. AI/ML toolchain compatible.

H200
NVIDIA SXM · 141GB HBM3e
AU
Australian hosted & operated
Live
Foundation clients signed

What we offer

GPU compute you can access today

NVIDIA H200 servers colocated in Tier III Australian data centres. On-demand and reserved — built for AI and ML workloads that need genuine performance and local data residency.

On-demand

Hourly GPU access

Spin up H200 GPU compute by the hour. No long-term commitment — pay for what you use, when you need it.

Reserved

Capacity contracts

Reserved capacity for teams running sustained workloads. Predictable pricing, guaranteed allocation.

Residency

Australian data residency

All compute is physically hosted in Australia, operated by an Australian company. Your data stays in jurisdiction.

Compatible

ML toolchain ready

Compatible with Kubernetes, Slurm, and vLLM. Plug into your existing stack without rewriting workflows.


Who we serve

Built for teams that need Australian GPU compute

Foundation AI/ML clients are already signed, with more in the pipeline. Capacity is limited — reach out to secure your allocation.

AI developers · ML engineers · Research teams · AI startups · Enterprise ML · LLM fine-tuning · Inference workloads · Australian data residency · Government agencies · Universities

Where we're headed

Phase 1 is live. Phase 2 scales it up.

H200 colocation is generating revenue now. Phase 2 expands capacity with next-generation GPU platforms and introduces AI consultancy services as the business matures.

We prove the demand and build the customer relationships first — then we scale the hardware. That sequencing is intentional.

GPU Compute

NVIDIA H200 compute, hosted in Australia

We deploy and operate NVIDIA H200 SXM GPU servers inside Tier III Australian data centres. Available now via on-demand and reserved capacity models.

Hardware

NVIDIA H200 SXM — best-in-class for inference and training

Lumus deploys the NVIDIA H200 SXM — current best-in-class GPU for AI inference and model training. 141GB of HBM3e memory and the performance headroom that serious ML workloads require.

  • NVIDIA H200 SXM — 141GB HBM3e memory per GPU
  • High memory bandwidth for large model inference and training
  • Multi-GPU configurations available for distributed training jobs
  • Diverse fibre connectivity to major Australian internet exchanges
Access models

On-demand and reserved capacity

Access GPU compute by the hour for flexible workloads, or secure reserved capacity contracts for sustained operations. Pricing is benchmarked against leading global GPU cloud providers — competitive rates for Australian-hosted compute.

  • On-demand hourly access — no long-term commitment required
  • Reserved capacity contracts — predictable cost, guaranteed allocation
  • Pricing benchmarked against Vast.ai, Lambda Labs, and equivalent platforms
  • Contact us for current pricing and availability
Software

Compatible with your existing ML toolchain

Lumus compute integrates with the tools AI developers and ML teams already use. No proprietary lock-in, no workflow rewrites. Access via standard APIs and schedulers.

  • Kubernetes — container orchestration for ML workloads
  • Slurm — HPC-style job scheduling
  • vLLM — optimised large language model inference
  • Standard API access for custom integrations
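As a concrete flavour of what "no workflow rewrites" means: vLLM exposes an OpenAI-compatible HTTP API, so existing client code can point at a hosted endpoint unchanged. The sketch below is illustrative only — the host URL and model name are placeholders, not Lumus endpoints, and it assumes a vLLM server is already running.

```python
"""Minimal sketch: building and sending an OpenAI-style chat request
to a vLLM server. Host and model name are hypothetical placeholders."""
import json
from urllib import request

VLLM_URL = "http://localhost:8000/v1/chat/completions"  # placeholder host


def build_chat_payload(model: str, prompt: str, max_tokens: int = 128) -> dict:
    """OpenAI-style chat completion body, as accepted by vLLM's server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def query(payload: dict) -> dict:
    """POST the payload to the vLLM endpoint and decode the JSON response."""
    req = request.Request(
        VLLM_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


# Build a request locally; query(payload) would send it to a live server.
payload = build_chat_payload("my-fine-tuned-llm", "Hello from Melbourne")
print(payload["messages"][0]["content"])
```

Because the request shape is the OpenAI standard, the same client code works against vLLM whether it runs locally or on rented compute.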
Infrastructure

Tier III+ Australian data centres

Servers are colocated in established Tier III or above data centre facilities in Melbourne or Sydney — redundant power, cooling, and connectivity built to industry-standard uptime specifications.

  • Tier III+ colocation — redundant power and cooling
  • Melbourne or Sydney facilities
  • Diverse fibre paths to major Australian internet exchanges
  • Australian jurisdiction — data residency guaranteed
  • Australian-owned and operated company

Ready to access Australian GPU compute? Get in touch for pricing and availability.

Our Roadmap

Live now. More capacity and platforms coming.

Two phases — prove the model with H200 colocation, then expand capacity and service lines as demand justifies. We advance to the next phase only once the current one has delivered its target outcomes.

Phase 1 — Active now

GPU Colocation

NVIDIA H200 servers colocated in Tier III Australian data centres. Foundation clients signed. Revenue generating. Demand being validated.

Phase 2 — Planning

Scale & Expand

Expand GPU capacity with next-generation server platforms. Introduce additional colocation sites for geographic diversity. AI consultancy services evaluated for launch.


Phase 2 detail

Next-generation hardware and new service lines

Phase 2 scales what Phase 1 proves — more capacity, newer hardware, and expanded services driven by customer demand.

Hardware Roadmap

Next-generation GPU platforms

Phase 1 deploys the NVIDIA H200. Phase 2 introduces next-generation platforms as they become available and as customer demand justifies. Hardware selection is driven by workload requirements and market availability — not speculative bulk purchasing.

  • NVIDIA DGX B300
  • NVIDIA GB300 NVL72
  • Application-specific accelerators (ASICs) where commercially appropriate
  • Additional configurations according to customer workload demand
  • Hardware-agnostic software stack supports rapid platform transitions
AI Consultancy

Advisory services — Phase 2

Lumus is evaluating the addition of AI consultancy services in Phase 2, informed by customer feedback and demand from Phase 1. Scope and delivery model to be confirmed.

  • Compute cost optimisation and power efficiency advisory
  • Inference cost modelling for AI product companies
  • Workload architecture guidance — model selection, batch vs real-time, GPU right-sizing
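To give a flavour of the inference cost modelling listed above, a back-of-envelope model relates GPU hourly cost and sustained throughput to cost per million tokens. All figures below are hypothetical placeholders, not Lumus pricing.

```python
def cost_per_million_tokens(gpu_hourly_rate_aud: float,
                            tokens_per_second: float) -> float:
    """Cost (AUD) to generate one million tokens on a single GPU
    at a sustained throughput, given its hourly rental rate."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_rate_aud * (1_000_000 / tokens_per_hour)


# Hypothetical example: a $4/hr GPU sustaining 2,000 tokens/s of
# batched inference works out to roughly $0.56 per million tokens.
print(round(cost_per_million_tokens(4.0, 2000.0), 2))  # → 0.56
```

The same arithmetic run in reverse (target cost per token → required throughput) is how right-sizing decisions between batch and real-time serving are typically framed.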
Infrastructure

Geographic expansion

Phase 2 continues to operate within third-party colocation facilities. Additional sites may be engaged to support geographic diversity, latency requirements, or capacity demand beyond what a single facility can serve.

  • Additional Tier III+ colocation sites as demand justifies
  • Geographic diversity across Australian jurisdictions
  • Continued Australian sovereign ownership and operation

Want to be involved in shaping Phase 2? Get in touch.

About us

An Australian company, building from the ground up

Lumus Technology Pty Ltd is an Australian AI compute infrastructure company — staged growth, disciplined capital, sovereign by design.

We operate on a staged growth model: generate early revenue and prove demand through GPU colocation, then scale capacity and service lines as market demand justifies.

Phase 1 is live. NVIDIA H200 servers are being deployed in Tier III Australian data centres, foundation AI/ML clients are signed and in the pipeline, and the business is generating revenue. We advance to the next stage only once the current stage has delivered its target outcomes.

Based in Melbourne. Australian owned and operated.

Tim
Co-founder — External lead
Business development, fundraising, and partnerships. Law and Economics graduate. Extensive supplier relationships across the semiconductor industry. Prior experience raising capital across property development projects.
Charlie
Co-founder — Internal lead
Operations, research, and financial modelling. Building deep technical knowledge of GPU infrastructure, data centre operations, and energy systems. Manages vendor relationships, client pipeline, and business execution.

Current status

Where we are right now

Phase 1 is active and moving. The foundation is in place.

Active

Foundation clients

AI/ML clients signed, with more in the active pipeline.

In progress

NVIDIA engagement

NVIDIA Developer and Inception Program engagement underway.

Active

H200 procurement

Procurement deals signed. H200 servers being brought online.

Active

Investor engagement

Active investor and strategic partner conversations underway.

Contact

Let's start a conversation

Whether you need GPU compute, want to partner, or are exploring investment — we'd like to hear from you.

We work with AI developers, ML engineers, research teams, and enterprises that need high-performance GPU compute with Australian data residency. Capacity is limited — get in touch to discuss your requirements.

Based in Melbourne, Victoria, Australia
Email: hello@lumustechnology.com
Focus areas: GPU compute rental · Australian data residency · AI/ML workloads · Investment