We deploy NVIDIA H200 servers inside Tier III Australian data centres — enterprise-grade GPU compute available on-demand and via reserved capacity contracts. Australian data residency. AI/ML toolchain compatible.
NVIDIA H200 servers colocated in Tier III Australian data centres. On-demand and reserved — built for AI and ML workloads that need genuine performance and local data residency.
Spin up H200 GPU compute by the hour. No long-term commitment — pay for what you use, when you need it.
Reserved capacity for teams running sustained workloads. Predictable pricing, guaranteed allocation.
All compute is physically hosted in Australia, operated by an Australian company. Your data stays in jurisdiction.
Compatible with Kubernetes, Slurm, and vLLM. Plug into your existing stack without rewriting workflows.
Foundation AI/ML clients are signed, with more in the pipeline. Capacity is limited — reach out to secure your allocation.
H200 colocation is generating revenue now. Phase 2 expands capacity with next-generation GPU platforms and introduces AI consultancy services as the business matures.
We prove the demand and build the customer relationships first — then we scale the hardware. That sequencing is intentional.
We deploy and operate NVIDIA H200 SXM GPU servers inside Tier III Australian data centres. Available now via on-demand and reserved capacity models.
Lumus deploys the NVIDIA H200 SXM, a class-leading GPU for AI inference and model training: 141 GB of HBM3e memory, 4.8 TB/s of memory bandwidth, and the performance headroom that serious ML workloads require.
Access GPU compute by the hour for flexible workloads, or secure reserved capacity contracts for sustained operations. Pricing is benchmarked against leading global GPU cloud providers — competitive rates for Australian-hosted compute.
Lumus compute integrates with the tools AI developers and ML teams already use. No proprietary lock-in, no workflow rewrites. Access via standard APIs and schedulers.
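As an illustration of what "no workflow rewrites" means in practice, a standard Slurm batch script can request H200 GPUs and serve a model with vLLM exactly as it would on any other cluster. The partition name, GRES label, and model below are hypothetical placeholders, not confirmed Lumus configuration:

```shell
#!/bin/bash
# Illustrative Slurm job script — resource names are site-specific examples.
#SBATCH --job-name=llm-serve
#SBATCH --partition=gpu            # hypothetical partition name
#SBATCH --gres=gpu:h200:8          # request 8 H200 GPUs (GRES label varies by site)
#SBATCH --time=04:00:00

# Launch an OpenAI-compatible inference server with vLLM
# across all 8 GPUs via tensor parallelism.
vllm serve meta-llama/Llama-3.1-70B-Instruct \
    --tensor-parallel-size 8
```

The same workload ports unchanged from any Slurm- or Kubernetes-based GPU cloud; only the scheduler endpoint changes.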
Servers are colocated in established Tier III or above data centre facilities in Melbourne or Sydney — redundant power, cooling, and connectivity built to industry-standard uptime specifications.
Ready to access Australian GPU compute? Get in touch for pricing and availability.
Two phases — prove the model with H200 colocation, then expand capacity and service lines as demand justifies. Each phase advances only once the previous one has delivered its target outcomes.
NVIDIA H200 servers colocated in Tier III Australian data centres. Foundation clients signed. Revenue generating. Demand being validated.
Expand GPU capacity with next-generation server platforms. Introduce additional colocation sites for geographic diversity. AI consultancy services evaluated for launch.
Phase 2 scales what Phase 1 proves — more capacity, newer hardware, and expanded services driven by customer demand.
Phase 1 deploys NVIDIA H200. Phase 2 introduces next-generation platforms as they become available and as customer demand justifies. Hardware selection is driven by workload requirements and market availability — not speculative bulk purchasing.
Lumus is evaluating the addition of AI consultancy services in Phase 2, informed by customer feedback and demand from Phase 1. Scope and delivery model to be confirmed.
Phase 2 continues to operate within third-party colocation facilities. Additional sites may be engaged to support geographic diversity, latency requirements, or capacity demand beyond what a single facility can serve.
Want to be involved in shaping Phase 2? Get in touch.
Lumus Technology Pty Ltd is an Australian AI compute infrastructure company — staged growth, disciplined capital, sovereign by design.
We operate on a staged growth model: generate early revenue and prove demand through GPU colocation, then scale capacity and service lines as market demand justifies.
Phase 1 is live. NVIDIA H200 servers are being deployed in Tier III Australian data centres, foundation AI/ML clients are signed with more in the pipeline, and the business is generating revenue. We advance to the next stage only once the current stage has delivered its target outcomes.
Based in Melbourne. Australian owned and operated.
Phase 1 is active and progressing. The foundation is in place.
AI/ML clients signed and in active pipeline.
NVIDIA Developer and Inception Program engagement underway.
Procurement deals signed. H200 servers being brought online.
Active investor and strategic partner conversations underway.
Whether you need GPU compute, want to partner, or are exploring investment — we'd like to hear from you.
We work with AI developers, ML engineers, research teams, and enterprises that need high-performance GPU compute with Australian data residency. Capacity is limited — get in touch to discuss your requirements.