
WYFI Investor FAQ

A resource for investors to better understand WhiteFiber’s mission, operations, and market opportunity in building high-performance infrastructure for generative AI.

Company & Business Overview

What is WhiteFiber?

WhiteFiber is a publicly traded AI infrastructure company purpose-built to meet the scalability, high power density, and performance demands of generative AI workloads.

We deliver end-to-end, high-performance AI infrastructure through GPU cloud services and our owned Tier 3 data centers. Our vertically integrated model spans power procurement, data center design, and cloud, colocation, and hybrid deployment.

What is WhiteFiber’s mission?

Our mission is to deliver the high-performance digital infrastructure required to power the next generation of artificial intelligence.

We combine advanced data centers, energy partnerships, and state-of-the-art GPU clusters to provide scalable, reliable, and efficient computing capacity. By doing so, we aim to enable our customers to accelerate innovation while supporting a sustainable and resilient digital economy.

What is your business model?

Our business model is built on two primary revenue streams: Cloud Services, where customers pay for access to GPU clusters, and Colocation Services, where customers lease data center space, power, and infrastructure.

Both revenue streams are driven by factors such as GPU deployment and utilization, power availability, customer onboarding, and expansion activity.

Who are your customers?

We serve platforms and enterprises that require high-performance computing for AI training, inference, and other compute-intensive workloads.

Our customers include two core groups: Enterprise Clients, who run their own high-performance workloads; and Cloud Customers, who need on-demand access to GPUs.

On the data center side, our current and prospective enterprise clients are active in multiple industries, including healthcare, finance, and other sectors that rely on compute-intensive models. Colocation customers range from enterprises to neoclouds.

On the GPU cloud side, we serve two customer types: direct end users (such as AI application providers), and GPU marketplaces that resell our compute capacity to a broad base of developers and ML teams.

All of our customers want one thing: high-performance, low-latency, AI-ready infrastructure.

Where are your operations based?

We are headquartered in Hudson Yards, New York City.

Our Tier 3 data center sites span strategic North American markets, balancing brownfield retrofits and greenfield builds, with international expansion planned via partner data centers and selective site acquisition. By targeting power-constrained zones and edge locations, we believe we can deliver capacity where demand is greatest while optimizing cost and deployment speed.

Our cloud services business primarily operates out of a third-party data center in Iceland.

What kinds of workloads are running on WhiteFiber infrastructure?

Our infrastructure supports platforms serving thousands of small and mid-sized AI users. The most common workloads include AI training and inference across applications such as large language models (LLMs), medical imaging, quantitative trading, and visual effects rendering.

These workloads are highly power- and bandwidth-intensive, which is why our facilities are engineered with 45kW racks, direct-to-chip liquid cooling, and N+1 redundancy for efficiency, reliability, and scale.

Market & Strategy

What is WhiteFiber’s market opportunity?

Demand for high-performance AI infrastructure is accelerating, with growth in global data center and AI cloud markets being driven by generative AI model training, inference, and other compute-intensive workloads that require next-generation performance.

AI infrastructure is mission-critical: the economic potential of AI cannot be fully realized without the computing capacity to power it – and we believe WhiteFiber is uniquely positioned to capture this demand.

How does WhiteFiber differentiate from its competitors?

WhiteFiber is purpose-built for AI workloads, offering high-density GPU clusters with advanced cooling, power, and network performance that hyperscalers serving generic workloads aren’t optimized to deliver.

We also own and operate the full stack, from power procurement and data center facilities to our cloud platform, giving us margin, speed, and customization advantages. Our vertical integration enables faster deployment timelines and more reliable scaling for customers who need to expand quickly.

What is WhiteFiber’s growth strategy?

We are expanding capacity through new data center builds, strategic site acquisitions, and partnerships, while continuing to invest in cutting-edge GPU infrastructure and energy solutions. We are scaling WhiteFiber's infrastructure and capabilities to meet rising demand for AI and data processing, actively designing for what's next, not what's now.

Stock & Investor Details

What exchange is WhiteFiber listed on, and what is the ticker symbol?

Our common stock trades on NASDAQ under the symbol “WYFI.”

How can I purchase shares?

Shares can be purchased through any licensed broker or online trading platform.

Who is WhiteFiber’s transfer agent?

Our transfer agent is Transhare Securities Transfer and Registrar. Shareholders with questions about stock certificates, address changes, or ownership may contact our agent, Kimberly Whiteside, directly at 303-662-1112.

Corporate Governance

Who are the members of WhiteFiber’s leadership team and board of directors?

Biographies of our leadership team and board members are available in the Corporate Governance section of our Investor Relations site.

Our board comprises WhiteFiber’s CEO Sam Tabar and CFO Erke Huang, as well as non-executive independent directors Ichi Shih, Bill Xiong, David Andre and Pruitt Hall.

What are WhiteFiber’s corporate governance policies?

We are committed to conducting business at the highest standards of corporate governance. Our policies, including our Code of Conduct and Ethics and committee charters, can be found in the Governance section of our investor website.

Investor Communications

How can I receive investor updates?

You can sign up to receive our news and email alerts through our Investor Relations site.

Who do I contact for investor relations inquiries?

Please direct inquiries to ir@whitefiber.com.

Glossary of key terms


Generative AI workloads

Tasks like training and running large AI models (e.g., ChatGPT, image generators) that require massive computing power.

GPU (Graphics Processing Unit)

Specialized computer chips originally built for graphics, now essential for AI because they can handle huge amounts of data in parallel. Think of them as the "engines" powering AI.

GPU Clusters

Groups of GPUs working together like a supercomputer to handle big AI jobs.

Cloud Services (GPU Cloud)

Renting or leasing access to GPUs over the internet instead of owning expensive hardware.

Colocation Services

Solutions offering access to space, power, cooling, and on-site support inside a third-party data center for operating hardware owned by the end customer.

Data Centers (Tier 3)

Buildings filled with servers and networking equipment, engineered for reliability. "Tier 3" refers to a classification defined by the Uptime Institute, indicating high uptime, resilience, and concurrent maintainability.

High Power Density / up to 150kW Racks

Each server rack can draw up to 150,000 watts of power, much higher than standard racks, allowing more GPUs in a smaller footprint and accommodating emerging hardware designs with higher power requirements.
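To make the density figure concrete, the sketch below estimates how many GPUs fit in a rack at a given power budget. The per-GPU wattage and overhead fraction are illustrative assumptions, not WhiteFiber specifications.

```python
# Illustrative sketch: translating rack power density into GPU capacity.
# All wattage figures are assumptions for illustration only.

def gpus_per_rack(rack_kw, gpu_watts, overhead_fraction=0.2):
    """Estimate GPUs per rack, reserving a fraction of rack power
    for CPUs, memory, networking, and other server components."""
    usable_watts = rack_kw * 1000 * (1 - overhead_fraction)
    return int(usable_watts // gpu_watts)

# A standard ~10kW rack vs. a 150kW high-density rack,
# assuming roughly 1,000W per modern training GPU:
print(gpus_per_rack(10, 1000))   # -> 8
print(gpus_per_rack(150, 1000))  # -> 120
```

Under these assumptions, a 150kW rack hosts on the order of fifteen times as many GPUs as a standard rack, which is why density drives footprint and cost.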

Direct-to-Chip Liquid Cooling

A cooling method that delivers liquid coolant directly to processor components instead of relying on air, supporting modern high-wattage GPUs while improving efficiency and reducing power consumption.

N+1 Redundancy

Backup systems for power/cooling; if one component fails, another immediately takes over so operations never stop.
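The "N+1" in the name is a sizing rule: provision the N units needed to carry the full load, plus one spare. A minimal sketch, with illustrative (assumed) load and unit capacities:

```python
import math

# Illustrative sketch of N+1 sizing for power or cooling equipment.
# Load and capacity figures below are assumptions, not facility specs.

def units_required(load_kw, unit_capacity_kw, spares=1):
    """N+1 sizing: enough units to carry the load, plus `spares` backups."""
    n = math.ceil(load_kw / unit_capacity_kw)
    return n + spares

# A 2,000kW cooling load served by 500kW chillers needs N = 4, plus 1 spare:
print(units_required(2000, 500))  # -> 5
```

If any one of the five units fails, the remaining four still cover the full 2,000kW load, so operations continue uninterrupted.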

Brownfield Retrofit

Converting an existing building into a data center.

Greenfield Build

Constructing a new data center from scratch.

Edge Locations

Smaller data centers placed closer to where data is being generated/used, reducing delays.

AI Training vs. Inference

Training: Teaching an AI model using large amounts of data.
Inference: Using the trained model to make predictions or generate output.

Large Language Models (LLMs)

AI systems trained on massive text datasets to understand and generate human-like language.

Vertical Integration

Owning the full chain of operations (power, data centers, cloud services), giving more control, lower costs, and faster delivery compared to companies that outsource.

Hyperscalers

Cloud providers such as Amazon, Google, or Microsoft that run large-scale, multi-purpose cloud services.