Pre-Production · Shardstorm

Why I'm Using CSS to Build a Game UI

Unity has a built-in UI system. I'm not using it. How pain from Fizzics, muscle memory from SwiftUI, and a dark-on-dark aesthetic led to a three-system UI architecture for a mobile game.

March 24, 2026 · 6 min read

Unity ships with uGUI. It works. Millions of games use it. I used it for Fizzics and it got the job done.

I'm not using it for Shardstorm.

Not because it's bad. Because I've spent the last year building iOS apps with SwiftUI and websites with Tailwind CSS, and those tools rewired how I think about layout. When I looked at Shardstorm's nine screens and realized I'd be manually anchoring every panel, configuring every Canvas, and debugging EventSystem gotchas again, I went looking for something that felt more like the tools I'd gotten good at.

What I found was already inside Unity. I just hadn't looked at it before.

What Hurt in Fizzics

Fizzics uses uGUI for everything. Main menu, pause panel, game over popup. During Sprint 4, I purchased a neon sprite pack, generated TextMeshPro font atlases, and built out the full menu system by hand.

It worked. But "it worked" is doing a lot of heavy lifting in that sentence.

Canvas authoring in Unity is positional. You place elements by anchors and offsets. Need a button centered horizontally with 16 points of padding? Set the anchor to middle-center, adjust the rect transform, hope it holds across screen sizes. Need three cards stacked vertically with equal spacing? Add a Vertical Layout Group, configure padding, spacing, child alignment, child force expand. Then wonder why it looks wrong on a different aspect ratio and start adjusting the layout element min/preferred/flexible values.

It's not broken. It's just friction. Every layout decision requires three clicks and a mental model that doesn't transfer from anything else I've built.

The EventSystem was the one that actually cost me time. Unity buttons silently stop responding if there's no EventSystem in the scene. No error. No warning. Just dead buttons. I spent ten minutes in Fizzics staring at a perfectly styled neon button that refused to acknowledge my existence before remembering that particular gotcha. It's the kind of thing you learn once, but the fact that you have to learn it at all tells you something about the abstraction layer.
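One cheap insurance policy against that silent failure is a startup guard. A minimal sketch in Unity C# (the class name is mine; `FindObjectOfType` is the standard scene lookup):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Hypothetical guard: fail loudly if the scene has no EventSystem,
// instead of letting every uGUI button silently go dead.
public class EventSystemGuard : MonoBehaviour
{
    void Start()
    {
        if (FindObjectOfType<EventSystem>() == null)
        {
            Debug.LogError("No EventSystem in scene — uGUI buttons will not respond to input.");
        }
    }
}
```

Ten lines that would have saved me ten minutes.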

UI Toolkit: Flexbox in a Game Engine

Unity's newer UI system is called UI Toolkit. It uses UXML for structure (basically XML) and USS for styling (basically CSS). The layout model is Flexbox.

If you've built anything with CSS in the last decade, you already know how to use it. If you've used SwiftUI's HStack and VStack, same mental model. If you've used Tailwind's flex utilities, same muscle memory.

Three cards stacked vertically with equal spacing:

.card-container {
    flex-direction: column;
    gap: 16px;
    padding: 16px;
}

That's it. No layout group components. No child force expand toggles. No anchor math. The same pattern I use in React, the same pattern I use in SwiftUI, now working inside Unity.

The real win is USS variables for theming. Shardstorm's entire neon-on-dark aesthetic is defined in one stylesheet:

:root {
    --brand-primary: #4FC3F7;
    --neutral-void: #0A0E17;
    --neutral-surface: #141B2D;
    --spacing-md: 16px;
    --radius-large: 14px;
}

Change a variable, change every menu screen. That's the same pattern as Tailwind's design tokens or SwiftUI's environment values. I didn't have to learn a new concept. I just had to learn where Unity put the CSS file.
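Consuming those tokens is the same `var()` pattern as web CSS. A sketch, with hypothetical class names:

```css
.primary-button {
    background-color: var(--brand-primary);
    border-radius: var(--radius-large);
    padding: var(--spacing-md);
}

.screen {
    background-color: var(--neutral-void);
}
```

Swap `--brand-primary` in the theme file and every button in the game picks it up.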

Three Systems, Not One

The counterintuitive decision was to not pick one UI system. Shardstorm uses three.

UI Toolkit handles all menu screens: main menu, upgrade pick, meta-upgrades, run summary, settings, popups. These are traditional interface layouts. Cards, buttons, lists, modals. Flexbox is perfect for this.

World-space TextMeshPro handles the gameplay HUD: score, wave counter, combo text, damage numbers. These aren't UI panels floating above the game. They're objects IN the game world, using the same materials and shaders as the crystals and orbs.

The combo counter doesn't sit on a flat overlay. It glows like a crystal. It uses the same emission shader, the same bloom response. When a crystal shatters and the combo text scales up with an Impact Gold glow, it feels like part of the explosion, not a notification about the explosion.

PrimeTween handles all animation across both systems. This was a deliberate choice over DOTween, which is the default answer in most Unity projects. PrimeTween has zero garbage collection allocations. On a 60fps mobile game where every frame budget matters, that's not a nice-to-have. DOTween allocates on every tween creation. Over hundreds of UI animations per session (card reveals, score popups, combo scaling, screen transitions), those allocations add up and eventually trigger GC spikes. PrimeTween also has a modern async/await API and null-safe tween targets, which means fewer mystery crashes when a tween tries to animate a destroyed object.
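As a taste of that async API, here's a hedged sketch of a combo-text pop (assuming PrimeTween's `Tween.Scale` and awaitable tweens; the class and method names are mine):

```csharp
using PrimeTween;
using UnityEngine;

public class ComboPop : MonoBehaviour
{
    // Hypothetical: scale the combo text up, then settle back,
    // awaiting each tween in sequence. If this object is destroyed
    // mid-tween, null-safe targets mean no exception.
    public async void Pop()
    {
        await Tween.Scale(transform, endValue: 1.3f, duration: 0.15f, Ease.OutBack);
        await Tween.Scale(transform, endValue: 1f, duration: 0.2f, Ease.InOutSine);
    }
}
```

No coroutines, no callback pyramids, no allocations per tween.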

Glow Replaces Shadow

Here's a design system insight that's specific to dark-themed games and took me a minute to internalize.

In a normal app, you use shadows to create visual hierarchy. Elevated elements cast shadows on the elements below them. Cards float above backgrounds. Modals float above cards. The shadow tells your eye what's on top.

In a game where the background is near-black (#0A0E17), shadows are invisible. You can't cast a dark shadow on a dark surface. The entire elevation model that works in every iOS app and every website just doesn't apply.

Glow replaces shadow. Instead of an element casting darkness downward, it emits light outward. The hierarchy becomes: brighter things are more important. A resting card has a subtle border glow at 15% opacity. An active card pulses at 30%. A selected item blazes at 50%. The glow color follows the brand palette, so a Crystal Blue button has a Crystal Blue glow and a Shard Violet upgrade node has a Shard Violet glow.

The design system spec defines five glow tokens (none, subtle, standard, intense, danger) that map to every interactive state in the game. They're the equivalent of Tailwind's shadow-sm through shadow-2xl, just inverted. Light instead of dark.
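USS has no box-shadow property, so one way those tokens can cash out is as border glow at the intensities described above. A sketch with hypothetical selectors (the rgba values assume Crystal Blue, #4FC3F7):

```css
/* Hypothetical mapping of glow tokens to border styling. */
.card {
    border-width: 2px;
    border-color: rgba(79, 195, 247, 0.15); /* glow/subtle */
}

.card:hover {
    border-color: rgba(79, 195, 247, 0.30); /* glow/standard */
}

.card--selected {
    border-color: rgba(79, 195, 247, 0.50); /* glow/intense */
}
```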

Twelve Components Before Any Code

The Playbook's pre-production phase asks you to specify your component library before building it. For Shardstorm, that produced twelve reusable components: PrimaryButton, SecondaryButton, UpgradeCard, MetaUpgradeNode, StatRow, ToggleRow, ModalPanel, HUDText, CurrencyDisplay, RarityBadge, ScreenHeader, AdButton.

Each one has defined variants, token mappings, state behaviors, animation specs, and sizing constraints. The UpgradeCard has three rarity variants (Common, Uncommon, Rare) with different border glow colors. The MetaUpgradeNode has four states (Locked, Available, Unaffordable, Maxed) with specific glow intensities for each. Every button meets the 44-point minimum touch target. Every animation has a named PrimeTween easing curve.
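The rarity variants, for instance, might reduce to modifier classes that only swap the glow color. A sketch (selectors and color values are mine, not from the spec):

```css
.upgrade-card--common {
    border-color: rgba(176, 190, 197, 0.3);
}

.upgrade-card--uncommon {
    border-color: rgba(79, 195, 247, 0.3);  /* Crystal Blue */
}

.upgrade-card--rare {
    border-color: rgba(149, 117, 205, 0.3); /* Shard Violet */
}
```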

None of this code exists yet. It's all on paper.

That sounds like overhead, but it's the opposite. When I start building the upgrade pick screen, I won't be making design decisions in the Inspector. I'll be implementing a spec. The card slides in from the right with EaseOutCubic over 400ms, staggered 100ms per card. The selected card scales to 1.05x with glow/intense while the others fade to 0.3 opacity. It's already decided. I just have to type it.
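That spec translates almost line-for-line into PrimeTween calls. A hedged sketch of the staggered slide-in (assuming PrimeTween's `Tween.Custom` with a `startDelay` parameter; tweening `style.translate` is one way to animate UI Toolkit elements, not the only one):

```csharp
using PrimeTween;
using UnityEngine.UIElements;

public static class UpgradePickAnimations
{
    // Hypothetical: slide each card in from the right, 400ms EaseOutCubic,
    // staggered 100ms per card, per the spec.
    public static void RevealCards(VisualElement[] cards)
    {
        for (int i = 0; i < cards.Length; i++)
        {
            var card = cards[i];
            Tween.Custom(400f, 0f, duration: 0.4f,
                onValueChange: x => card.style.translate = new Translate(x, 0),
                ease: Ease.OutCubic,
                startDelay: i * 0.1f);
        }
    }
}
```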

Cross-Pollination Is the Point

The interesting thing about building across platforms is that patterns transfer. SwiftUI taught me to think in stacks. Tailwind taught me to think in tokens. React taught me to think in components with defined states. None of those tools were designed for game UI, but the mental models they built are exactly what UI Toolkit's UXML/USS system expects.

If I'd only ever built Unity games, I'd probably be fine with uGUI. Canvas anchoring is the water you swim in. But once you've used Flexbox, going back to manual anchor math feels like writing CSS with absolute positioning for everything. Technically possible. Unnecessarily painful.

The Playbook doesn't have a step called "bring what you learned from other platforms." But maybe it should.

What's Next

These specs feed directly into the Visual Target Build, which is still the gate. One crystal, one orb, one shatter with the full juice stack. The UI system gets validated when the combo counter glows like it belongs in the same world as the crystal it just helped destroy.

If the world-space HUD doesn't feel integrated with the gameplay, the three-system architecture was the wrong call. But I'm betting it won't feel that way. Because the whole point of putting the text in the game world is that it stops being UI and starts being part of the spectacle.