
Inside the Hopbox APU: A Hardware Teardown

Every piece of software in the Hopbox stack is open-source. But the hardware is where the bits meet the wire. In this post, we crack open a Hopbox SD-WAN appliance and walk through every component — what it is, why we chose it, and the trade-offs involved.

The Hopbox CPE (Customer Premises Equipment) is a compact, fanless x86 appliance designed to sit in a retail store’s server closet, behind a reception desk, or in a branch office network rack. It runs OpenWrt, manages multiple WAN links, maintains encrypted tunnels, and provides local DNS — all without moving parts.

Quick specs:

Component       Specification
Board           AMD GX-412TC based (PC Engines APU heritage)
CPU             AMD GX-412TC, 4 cores, 1.0 GHz
RAM             2 GB / 4 GB DDR3-1333 (soldered)
Storage         mSATA SSD, 16 GB
NICs            3x Intel i211AT Gigabit Ethernet
Connectivity    2x USB 3.0, 1x RS-232 serial
Power           12V DC, ~6-12W typical draw
Cooling         Fanless, passive heatsink
Enclosure       Aluminum, wall-mountable

The most fundamental design decision: CPU architecture.

ARM-based boards (Raspberry Pi, NanoPi, various Qualcomm/MediaTek SoCs) are cheaper and more power-efficient. They’re the default choice for consumer routers and many embedded network devices. So why did we go with x86?

Software compatibility. OpenWrt supports ARM well for routing, but our stack goes beyond basic routing. We run:

  • WireGuard (performs well on both, but x86 has AES-NI for other crypto operations)
  • PowerDNS Recursor (benefits from a larger, more cache-friendly CPU)
  • Prometheus node_exporter and our custom probe exporter
  • Ansible over SSH (needs a reasonably capable Python runtime)
  • Occasional ad-hoc diagnostics (tcpdump, iperf3, traceroute)

x86 gives us the full Linux ecosystem without cross-compilation headaches, library compatibility issues, or kernel module concerns.

Crypto performance. The AMD GX-412TC supports AES-NI. WireGuard itself uses ChaCha20 (which is fast on all architectures), but we also use AES for other operations. AES-NI on x86 gives us consistent, high-throughput encryption without dedicated hardware accelerators.
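You can verify the AES-NI claim from a shell on any Linux x86 box — the CPU advertises it as the `aes` flag in /proc/cpuinfo. This is a generic Linux check, not Hopbox-specific tooling:

```shell
# Check whether the CPU exposes AES-NI (the "aes" flag in /proc/cpuinfo)
if grep -qw aes /proc/cpuinfo; then
    echo "AES-NI available"
else
    echo "AES-NI not available"
fi
```

On the GX-412TC this reports the flag as present; on many ARM SoCs the equivalent (ARMv8 crypto extensions) is hit-or-miss.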

Driver maturity. Intel NIC drivers on x86 Linux are rock-solid. ARM SoC network drivers can be less mature, especially for features like hardware offloading, VLAN support, and multiqueue.

The trade-off: Higher power consumption (6-12W vs 2-5W for ARM) and higher BOM cost. For a device that runs 24/7 in a location that already has power infrastructure, this trade-off is acceptable.

The AMD GX-412TC is a quad-core “Jaguar” microarchitecture SoC, originally designed for embedded applications:

  • 4 cores at 1.0 GHz
  • 2 MB L2 cache
  • 64-bit x86 (amd64)
  • AES-NI support
  • TDP: 6W

It’s not fast by modern standards, but for a network appliance doing routing, NAT, firewalling, WireGuard tunneling, and DNS, it’s more than sufficient. Our production devices rarely exceed 30% CPU utilization under normal load.

The Jaguar core was also used in the original PS4 and Xbox One — a mature, well-understood microarchitecture with excellent Linux support.

RAM is soldered on the board (not upgradeable). Our fleet is a mix of 2 GB and 4 GB variants:

  • 2 GB: Sufficient for standard deployments (routing, NAT, WireGuard, DNS). Typical RAM usage sits around 500-800 MB.
  • 4 GB: Used for sites with higher DNS cache requirements, more complex firewall rules, or where we run additional monitoring agents.

OpenWrt’s minimal footprint means even 2 GB leaves significant headroom. The Linux kernel, OpenWrt’s base system, and all our services typically consume under 1 GB.
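A quick way to eyeball that headroom on a device is to read /proc/meminfo directly (a generic Linux one-liner, not part of the Hopbox tooling):

```shell
# Total and available RAM in MB, straight from /proc/meminfo
awk '/^MemTotal:/ {t=$2} /^MemAvailable:/ {a=$2}
     END { printf "total=%d MB, available=%d MB\n", t/1024, a/1024 }' /proc/meminfo
```

On a healthy 2 GB unit, "available" should sit comfortably above 1 GB.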

Each device has a 16 GB mSATA SSD. The choice of SSD over flash/eMMC was deliberate:

Why SSD:

  • Write endurance. Our devices log metrics, DNS query data, and system events continuously. Flash storage (NAND without wear leveling) would degrade quickly under this write pattern.
  • Performance. Firmware upgrades involve writing 100+ MB images. SSD write speeds make upgrades faster and more reliable.
  • Replaceability. mSATA is a standard form factor. If a drive fails, field replacement is straightforward.

Wear leveling considerations:

Even with an SSD, continuous writes from logging and metrics can be a concern over a 5+ year device lifespan. We mitigate this by:

  • Writing logs to a tmpfs (RAM disk) and flushing to SSD periodically
  • Using log rotation with aggressive size limits
  • Monitoring SSD health via SMART attributes (exposed as Prometheus metrics)
# Check SSD health on a Hopbox device
smartctl -a /dev/sda | grep -E "(Wear_Leveling|Power_On_Hours|Reallocated)"
# Example output:
# 5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0
# 177 Wear_Leveling_Count 0x0013 098 098 000 Pre-fail Always - 42
# 9 Power_On_Hours 0x0032 095 095 000 Old_age Always - 18762
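The tmpfs-plus-flush approach can be sketched as a mount entry plus a size-capped logrotate stanza. Paths and limits below are illustrative, not our exact production config:

```
# /etc/fstab: keep hot logs in RAM, flush to /data on a timer
tmpfs  /var/log  tmpfs  defaults,size=64M,noatime  0  0

# /etc/logrotate.d/hopbox: aggressive size-based rotation (illustrative)
/data/logs/*.log {
    size 10M
    rotate 3
    compress
    missingok
    notifempty
}
```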

Typical storage layout:

/dev/sda1   /boot    50 MB   (kernel, bootloader)
/dev/sda2   /        4 GB    (OpenWrt root filesystem)
/dev/sda3   /data    10 GB   (logs, metrics buffer, config backups)
swap                 1 GB
reserved             ~1 GB   (future use)

This is the component we’re most opinionated about. Every Hopbox device uses Intel i211AT Gigabit Ethernet controllers.

Why Intel NICs for a network appliance:

  1. Driver quality. The igb driver (in-tree Linux kernel driver for i211) is mature, well-maintained, and thoroughly tested. Intel NIC drivers are the gold standard in Linux networking.

  2. Hardware offloading. The i211 supports TCP/UDP checksum offload, TCP segmentation offload (TSO), and receive-side scaling (RSS). This reduces CPU load for high-throughput forwarding.

  3. VLAN support. Hardware VLAN tagging/stripping works reliably. Some cheaper NICs have buggy VLAN implementations that cause subtle packet corruption or dropped frames.

  4. Multiqueue. The i211 supports multiple TX/RX queues, enabling better multi-core utilization for packet processing.

  5. OpenWrt compatibility. The igb driver is included in OpenWrt’s default kernel configuration. No out-of-tree modules, no firmware blobs, no compatibility concerns.

What we avoided: Realtek NICs (common in consumer hardware) have historically had less robust Linux drivers, worse offloading support, and more quirks under high load. For a device that routes production traffic 24/7, we won’t compromise on NIC quality.
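To see what the igb driver actually enables on a given port, `ethtool -k` lists the offload features per interface. A quick check might look like this (the interface name `eth0` is an assumption; it varies by deployment):

```shell
# List key offload features on an interface (eth0 is an assumption)
IFACE=eth0
ethtool -k "$IFACE" 2>/dev/null \
    | grep -E 'tcp-segmentation-offload|checksumming|receive-hashing' \
    || echo "ethtool not available or interface $IFACE not found"
```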

The 3 Ethernet ports are assigned as:

eth0 (Intel i211) → LAN (internal network)
eth1 (Intel i211) → WAN1 (primary uplink)
eth2 (Intel i211) → WAN2 (secondary uplink)

For sites with more than 2 WAN links, the third link is typically a USB Ethernet adapter (also Intel-based) or a 4G modem connected via USB.
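In OpenWrt terms, the port roles above map onto interface sections roughly like this. It's a simplified sketch — addresses and section names are assumptions, and the real config also carries firewall zones and multi-WAN policy:

```
# /etc/config/network (excerpt, illustrative)
config interface 'lan'
        option device 'eth0'
        option proto 'static'
        option ipaddr '192.168.1.1'
        option netmask '255.255.255.0'

config interface 'wan1'
        option device 'eth1'
        option proto 'dhcp'

config interface 'wan2'
        option device 'eth2'
        option proto 'dhcp'
```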

The device runs on 12V DC input, drawing 6-12W depending on load:

  • Idle: ~6W (all NICs connected, minimal traffic)
  • Typical: ~8W (routing, NAT, WireGuard tunnel active)
  • Peak: ~12W (firmware upgrade + heavy traffic + all ports saturated)

Why 12V DC:

  • Common in networking equipment — compatible with standard 12V adapters and PoE splitters
  • Widely available replacement power supplies
  • Can be powered from a 12V UPS or battery backup

We recommend sites deploy the Hopbox with a small 12V UPS for ride-through during brief power interruptions. A 7Ah 12V battery can keep the device running for 6+ hours.
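The 6+ hour figure checks out as a back-of-the-envelope calculation. The 70% usable-capacity derating below is our assumption for battery aging and conversion losses:

```shell
# 12V x 7Ah = 84Wh nominal capacity; derate to ~70% usable;
# divide by the ~8W typical draw to estimate runtime in hours
awk 'BEGIN { printf "%.1f hours\n", (12 * 7 * 0.7) / 8 }'
```

Even at the 12W peak draw, the same arithmetic still clears the 6-hour mark before derating.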

The device is completely fanless. An aluminum heatsink integrated into the top of the enclosure dissipates heat from the CPU:

  • Ambient 25C → CPU ~55C under sustained load
  • Ambient 40C → CPU ~70C under sustained load (still well within the chip's 90C thermal limit)

In Indian operating conditions (retail stores, offices, sometimes non-air-conditioned spaces), we’ve seen ambient temperatures up to 45C during summer. The device handles this without thermal throttling.
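We track these temperatures via the kernel's thermal sysfs interface; a minimal read looks like this (the zone path is platform-dependent, so treat `thermal_zone0` as an assumption):

```shell
# Read the first thermal zone in millidegrees C (path varies by platform)
if [ -r /sys/class/thermal/thermal_zone0/temp ]; then
    t=$(cat /sys/class/thermal/thermal_zone0/temp)
    echo "CPU temp: $((t / 1000))C"
else
    echo "no thermal zone exposed"
fi
```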

The fanless design is important for reliability — fans are the most common point of failure in networking equipment. With no moving parts, the Hopbox should operate for 7-10 years without hardware maintenance.

The aluminum enclosure serves multiple purposes:

  • Heatsink. The top panel doubles as the CPU heatsink.
  • Shielding. EMI shielding for regulatory compliance.
  • Durability. No plastic parts that crack or yellow over time.
  • Mounting. Wall-mount holes on the bottom panel, rubber feet for desk placement.

The enclosure is designed to be unobtrusive in a retail or office environment — roughly the size of a paperback book.

Each device goes through a provisioning process before deployment:

  1. mSATA SSD is flashed with the current production firmware
  2. Device boots, runs a hardware self-test (NIC link, SSD SMART, RAM test)
  3. Unique device identity (serial number, WireGuard keypair) is generated and registered in the Hopbox Cloud API
  4. Site-specific configuration is provisioned via Ansible
  5. Device is shipped to the deployment site
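The hardware self-test in step 2 can be sketched as a small shell function. NIC names here are assumptions, and the real test also covers SSD SMART and RAM:

```shell
#!/bin/sh
# Minimal sketch of the NIC portion of the provisioning self-test:
# verify each expected interface exists under /sys/class/net
self_test() {
    fail=0
    for nic in "$@"; do
        [ -e "/sys/class/net/$nic" ] || { echo "missing NIC: $nic"; fail=1; }
    done
    return $fail
}

self_test eth0 eth1 eth2 || echo "self-test failed"
```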

A reasonable question: why build a custom appliance when you can buy a MikroTik, Ubiquiti, or TP-Link router for a fraction of the cost?

  1. Software control. We need full control of the operating system, the routing stack, the DNS resolver, the VPN implementation, and the monitoring agents. Consumer routers run vendor firmware with limited extensibility.
  2. Hardware quality. Consumer routers cut costs on NICs, storage, and power design. We’ve seen enough Realtek NIC driver bugs and flash storage failures to justify the premium.
  3. Uniformity. Managing 900+ devices requires a homogeneous fleet. One hardware platform, one firmware image, one set of Ansible playbooks.
  4. Lifecycle. Consumer router vendors discontinue models and firmware updates on 2-3 year cycles. We control our hardware lifecycle.

The Hopbox appliance isn’t exotic hardware — it’s well-chosen commodity components assembled for the specific demands of an SD-WAN CPE: reliable NICs, adequate compute, durable storage, passive cooling, and a form factor that fits in any environment. The magic isn’t in the hardware. It’s in the open-source software that runs on it.
