HORNET

HORNET is one of Germany’s largest interactive ultra-high-resolution display walls. Its 35 thin-bezeled Full HD LCD displays form a wall roughly seven by three meters in size, with a combined resolution of approximately 72 megapixels. The pixel density is high enough to exceed the angular resolution of the human eye at viewing distances beyond 2 meters. HORNET is equipped with an optical tracking system consisting of seven tracking cameras with active infrared illumination. It enables large-scale, highly detailed scientific visualization as well as interactive, collaborative work, and is therefore well suited for industrial design processes and visual data analytics.
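The viewing-distance claim can be sanity-checked with a short calculation, assuming a visual acuity of roughly one arcminute for the human eye and the panel figures listed below (46″ Full HD panels, 103 cm wide):

```python
import math

# Assumptions: 46" Full HD panel, 103 cm wide (from the spec below),
# and a visual acuity of ~1 arcminute.
panel_width_m = 1.03
pixels_across = 1920
pixel_pitch_m = panel_width_m / pixels_across  # ~0.54 mm

one_arcmin_rad = math.radians(1 / 60)

# Distance at which one pixel subtends one arcminute: beyond this,
# the eye can no longer resolve adjacent pixels.
distance_m = pixel_pitch_m / math.tan(one_arcmin_rad)

print(f"pixel pitch: {pixel_pitch_m * 1000:.2f} mm")
print(f"eye-limited viewing distance: {distance_m:.2f} m")
```

With these assumptions the pitch works out to about 0.54 mm and the critical distance to roughly 1.8 m, consistent with the 2-meter figure above.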

The 35 displays are driven by nine Nvidia GeForce GTX 780 graphics cards distributed over a cluster of three “Display PCs”. An additional cluster of twelve “Visualization PCs”, equipped with a total of 36 Nvidia GeForce GTX TITAN graphics cards, is available for rendering high-quality images. The two clusters are connected via a 60 Gbit/s fiber-optic link.

The acquisition and installation of HORNET was facilitated by the 2013 FHInvest Programme of the German Federal Ministry of Education and Research (BMBF) and the Ministry of Innovation, Higher Education and Research of North Rhine-Westphalia (MIWF Nordrhein-Westfalen).

HORNET – Technical Key Data

Displays:

  • 35 Full HD 46″-displays (103×58 cm²).
  • 5.7 mm inter-panel bezel.
  • Displays are arranged in stacking frames to form a slightly curved 7×5 display array (overall curvature of 60 degrees, approximately 21:9 CinemaScope aspect ratio).
  • Two connection modes:
    • One Full HD signal replicated to all displays via daisy-chained DVI wiring and the integrated video scaler.
    • One dedicated Full HD signal per display for high-quality visualization.
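The wall-level figures quoted in the introduction follow directly from the panel data above (a quick check, treating the narrow bezels as negligible):

```python
# Panel figures from the spec: 7x5 array of 46" Full HD panels, 103x58 cm each.
cols, rows = 7, 5
panel_w_m, panel_h_m = 1.03, 0.58
px_w, px_h = 1920, 1080

wall_w_m = cols * panel_w_m                      # ~7.2 m wide
wall_h_m = rows * panel_h_m                      # ~2.9 m tall
megapixels = cols * px_w * rows * px_h / 1e6     # ~72.6 MP
aspect = (cols * px_w) / (rows * px_h)           # ~2.49:1, close to 21:9

print(f"{wall_w_m:.1f} m x {wall_h_m:.1f} m, {megapixels:.1f} MP, aspect {aspect:.2f}:1")
```

This reproduces the "seven by three meters", "approximately 72 megapixels", and "approximately 21:9" figures stated elsewhere on this page.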

Display cluster:

  • 3 Display PCs provide direct display control.
  • PC configuration
    • Supermicro motherboard X9DRi-LN4F+ (four PCIe 3.0 x16 slots, integrated IPMI management board).
    • 3 ASUS GTX780-DC2OC-3GD5, with 4 video outputs each.
    • 2 Intel Xeon E5-2609, 4 Cores, 2.4 GHz.
    • 64 GB DDR3-1600 RAM.
    • 1 Mellanox ConnectX-3 EN 10 GbE dual-port SFP+ network interface card for the fiber-optic network (both ports in use, bonded for an aggregate 20 Gbit/s).
  • Private fiber-optic network via IBM BNT RackSwitch G8124ER.
    • 480 Gbit/s non-blocking switching throughput (full duplex).
    • Deterministic latency of only 570 nanoseconds.
  • Public network via 1 Gbit/s switch Netgear GS752TXS.
  • Dedicated management network.
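As a quick consistency check using only the numbers listed above, the display cluster's video outputs can be tallied against the number of panels:

```python
# Figures from the spec above: 3 Display PCs, each with 3 GTX 780 cards
# providing 4 video outputs per card.
display_pcs = 3
gpus_per_pc = 3
outputs_per_gpu = 4

total_outputs = display_pcs * gpus_per_pc * outputs_per_gpu
displays = 7 * 5  # 7x5 panel array

print(total_outputs, displays)  # 36 outputs for 35 panels
```

The cluster thus has exactly one spare output beyond the 35 panels it drives.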

Visualization cluster:

  • 12 Visualization PCs provide computational power for parallel and distributed CUDA computations in high-quality visualization (global illumination, big data, …).
  • Located in the server room due to their elevated noise and heat levels.
  • PC configuration:
    • Supermicro motherboard X9DRi-LN4F+ (four PCIe 3.0 x16 slots, integrated IPMI management board).
    • 3 ASUS GTX TITAN-6GD5.
    • 2 Intel Xeon E5-2609, 4 Cores, 2.4 GHz.
    • 64 GB DDR3-1600 RAM.
    • 1 Mellanox ConnectX-3 EN 10 GbE dual-port SFP+ network interface card for the fiber-optic network (one port in use, 10 Gbit/s).
  • Private fiber-optic network via IBM BNT RackSwitch G8124ER.
    • 480 Gbit/s non-blocking switching throughput (full duplex).
    • Deterministic latency of only 570 nanoseconds.
  • Public network via 1 Gbit/s switch Allied Telesis AT-8000GS/24.
  • Dedicated management network.

Inter-cluster connection:
The IBM switches of the Display Cluster (behind the display wall) and of the Visualization Cluster (in the server room) are interconnected at a data rate of 60 Gbit/s using port bonding.

Interaction:

  • 7 active-infrared ARTTRACK3 cameras for robust 6-DOF tracking.
  • ART Flystick2.
  • 5 Microsoft Kinect cameras.

Modes of operation:

  • Display one Full HD signal from a single arbitrary source (e.g. a notebook) scaled across the entire display wall via the integrated video scaler.
  • Capture an Ultra HD signal from a single source (e.g. a notebook) with an additional PC equipped with a frame-grabber board. The signal is transmitted to the display cluster, which shows it in a window on a SAGE desktop on the display wall, possibly alongside other applications or signals.
  • A distributed renderer uses all connected video outputs of the display cluster to deliver a full-resolution (i.e. 72-megapixel) image on the display wall.
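The distributed renderer in the last mode has to split one logical 13440×5400 framebuffer into per-display tiles. A minimal sketch of that tiling logic (the function and type names here are illustrative, not HORNET's actual software):

```python
from typing import NamedTuple

class Tile(NamedTuple):
    col: int
    row: int
    x: int      # offset of this display's tile in the logical framebuffer
    y: int
    w: int
    h: int

def tile_layout(cols: int = 7, rows: int = 5, w: int = 1920, h: int = 1080):
    """One tile per display, covering the logical (cols*w) x (rows*h) framebuffer."""
    return [Tile(c, r, c * w, r * h, w, h)
            for r in range(rows) for c in range(cols)]

tiles = tile_layout()
print(len(tiles))   # 35 tiles, one per display
print(tiles[-1])    # bottom-right tile starts at (11520, 4320)
```

Each render node would draw only the tiles belonging to its own video outputs, using the tile's (x, y) offset to shift its view frustum within the shared scene.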