On the origins of the Hiperwall name

Many people are confused by the spelling of the Hiperwall® name, often misspelling it “Hyperwall” or even “Hyper Wall.”

The name Hiperwall is a registered trademark owned by the University of California (UC Irvine, in particular) and exclusively licensed for commercial use by Hiperwall Inc.

The goal of the research project led by Falko Kuester and myself when we were UCI professors was to develop technology to drive extremely high resolution tiled display walls. Our approach differed from that of other tiled display systems in that we wanted our system to scale easily to huge sizes, so we needed to avoid the centralized rendering system (read: potential bottleneck) that most others had. Therefore, we put powerful computers behind the displays. These display nodes perform all the rendering work for their displays and have little interaction with other display nodes. We use a central control node that simply tells the display nodes what to display but doesn't get in the path of the rendering, and thus doesn't bottleneck the system.

Because of this very distributed and highly parallel computing approach, our system is much more responsive than most other tiled display systems, so we called it the Highly Interactive Parallelized display Wall, or HIPerWall for short. The acronym is a little forced, because we had to ignore the word "display," but the idea is pretty clear. You can see the research project logo on this image of the desktop screen for the HIPerWall Mini system we showed at Apple's Worldwide Developers Conference in 2006. At 72 million pixels of screen resolution, the HIPerWall Mini was one of the highest resolution displays in the world at the time.

You’ll note that the “IP” in HIPerWall is highlighted in a different color. This is because we based our technology on the Internet Protocol (IP) rather than proprietary protocols or networks so we could interoperate and use standard, off-the-shelf equipment. This is one of the main reasons Hiperwall systems are so cost-competitive today: we use our advanced software on COTS computers, displays, and networks to create a powerful tiled display system without proprietary servers, amplifiers, and non-scalable bottlenecks.

About the same time we built HIPerWall, NASA Ames built a much smaller tiled display named Hyperwall, which surely led to some name confusion. NASA’s current Hyperwall is even higher resolution than the original 200 MPixel HIPerWall. In the meantime, Apple has made some displays for their stores to show iOS App sales, unfortunately naming them Hyperwall, too.

So to summarize, Hiperwall is the product derived from HIPerWall the research project. NASA and Apple both have Hyperwall systems, which are unrelated to each other and unrelated to Hiperwall.

Eclipse and Yoxos

I use Eclipse for my Java development. I used to use JBuilder Turbo, but it's now so hard to get a license for it for more than one computer that I've given up and switched to plain Eclipse.

Eclipse is a really good development environment with on-the-fly compilation, generally excellent features, and a few annoyances. One of the biggest annoyances is its update/install system, which usually doesn't actually find updates and typically doesn't do a good job of installing new components. One day, I tried to install the profiling tools, and the install system had such a hard time finding the components that campus security blocked access to the Hiperwall lab: they were sure only malware would hit 70 FTP servers in a few seconds. No, it turned out to be Eclipse, after I got us blocked a second time. So clearly, no Eclipse component installs on campus. When I tried at home, I gave up after an hour or so of it not finding the components. This isn't necessarily the fault of the Eclipse developers – they rely on free hosting for mirrors of the files, but the mirrors may not always be up to date or even complete.

Because of these troubles, I tried and am still using Yoxos. Yoxos creates a custom Eclipse at start time, which delays the initial start quite a bit as the components are downloaded, but if nothing changes, future starts are fairly fast. It allows you to select which components you want and then downloads (from Yoxos’ servers) and installs them for you. It works very well and I haven’t had any trouble with a Yoxos-built Eclipse.

The version I’m using is currently free, but as Yoxos is a commercial entity, they charge for some services and this version may eventually cost something. Whether it will be worth the money to save hassle depends on the cost. But for now, Yoxos is a terrific way to use Eclipse and is highly recommended for Java developers.

UCI EECS Colloquium Talk 2010

I will present a talk on “Hiperwall: From Research Project to Product” at the UCI EECS Colloquium at 5PM on Wednesday, Nov. 10, in McDonnell Douglas Engineering Auditorium.

The official announcement is here.

I made minor updates on 10/10, so be sure to get the updated presentation (below).

The presentation is Colloquium Presentation 2010 updated.

NEC Display Solutions Partners with Hiperwall

NEC Display Solutions announced today that they are partnering with Hiperwall for our software to power high-resolution display walls (sorry, I can’t stand the more limiting term “video walls”).

For more information, read their press release.

The History of HIPerWall: The Research Software (2005-2006)

The HIPerWall system was a pretty impressive collection of hardware for 2005, with 50 processors (more were added later), 50 GB of RAM, more than 10 TB of storage (we got a gift of 5 TB worth of drives for our RAID system from Western Digital), and 50 of the nicest monitors available, but it was the software that really made it special. Remember that HIPerWall is an acronym for Highly Interactive Parallelized display Wall. We took that interactivity seriously, so we didn't just want to be able to show a 400 million pixel image of a rat brain; we wanted to allow users to pan and zoom the image to visually explore the content. This user interactivity set the HIPerWall software apart from the other tiled display software available at the time and is still a major advantage over competing systems.

The original software was written by Sung-Jin Kim, a doctoral student at the time who was working on distributed rendering of large images. His software, TileViewer, was originally written to use Professor Kane Kim's TMO distributed real-time middleware, but Sung-Jin ported it to Mac OS X and IP networking so it could work on HIPerWall. TileViewer ran on both the control node and the display nodes. The control node managed the origin and zoom level of the image, while TileViewer on the display nodes computed exactly where each display was in the overall pixel space, then loaded and rendered the appropriate portion of the image. We preprocessed the images into a hierarchical format so the right level and image tiles (hence the name) could be loaded efficiently. The images were replicated to the display nodes using Apple's very powerful Remote Desktop software. TileViewer also allowed color manipulation of the image using Cg shaders, so we took advantage of the graphics cards' power to filter and recolor images. TileViewer didn't have much of a user interface beyond a few key presses, so Dr. Chris Knox, a postdoctoral scholar at the time, wrote a GTK-based GUI that allowed the user to select an image to explore and then provided zoom and movement buttons that zoomed and panned the image on the HIPerWall. The picture below shows Dr. Chris Knox and Dr. Frank Wessel examining a TileViewer image on HIPerWall. The Macs are visible on the left of the image. The one below that shows Sung-Jin Kim in front of TileViewer on HIPerWall.

TileViewer in use on HIPerWall

Sung-Jin Kim in front of HIPerWall
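
To make the tile-loading idea concrete, here is a minimal, hypothetical sketch in Java of the kind of computation each display node performs: given the origin and zoom the control node announces, figure out which slice of the global image lands on this panel and which tiles of the image pyramid to fetch. This is not the actual TileViewer code (which was built for Mac OS X); the tile size, level scheme, and all names here are illustrative assumptions.

```java
// Hypothetical sketch (not the actual TileViewer source): given the global image
// origin and zoom broadcast by the control node, a display node works out which
// part of the image lands on its own screen and which pyramid tiles to load.
public class TileSelectionSketch {

    static final int TILE_SIZE = 512;          // assumed tile edge length in pixels

    /** Pick the pyramid level whose scale best matches the current zoom. */
    static int chooseLevel(double zoom, int maxLevel) {
        // Level 0 = full resolution; each higher level halves the resolution.
        int level = (int) Math.floor(-Math.log(zoom) / Math.log(2.0));
        return Math.max(0, Math.min(maxLevel, level));
    }

    public static void main(String[] args) {
        // State the control node would broadcast (illustrative values):
        double originX = 120_000, originY = 40_000;  // image pixel at the wall's top-left
        double zoom = 0.25;                          // screen pixels per image pixel
        int maxLevel = 8;

        // This display node's fixed place in the wall (column 3, row 1 of 2560x1600 panels).
        int screenW = 2560, screenH = 1600;
        int col = 3, row = 1;

        // Region of the full-resolution image visible on this panel.
        double imgX0 = originX + col * screenW / zoom;
        double imgY0 = originY + row * screenH / zoom;
        double imgX1 = imgX0 + screenW / zoom;
        double imgY1 = imgY0 + screenH / zoom;

        // Convert to tile indices at the chosen pyramid level.
        int level = chooseLevel(zoom, maxLevel);
        double scale = Math.pow(2.0, level);         // full-res pixels per level pixel
        int tx0 = (int) Math.floor(imgX0 / scale / TILE_SIZE);
        int ty0 = (int) Math.floor(imgY0 / scale / TILE_SIZE);
        int tx1 = (int) Math.floor(imgX1 / scale / TILE_SIZE);
        int ty1 = (int) Math.floor(imgY1 / scale / TILE_SIZE);

        System.out.printf("Level %d, tiles x %d..%d, y %d..%d%n", level, tx0, tx1, ty0, ty1);
    }
}
```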

The HIPerWall was built in the newly completed Calit2 building at UCI. We knew HIPerWall was coming, so Professor Falko Kuester, the HIPerWall PI, and I, as Co-PI, worked to get infrastructure in place in the visualization lab. Falko was on the planning committee for the building, so we hoped our needs would be met. The building had good networking in place, though no user-accessible patch panels, but power was "value engineered" out. We quickly determined (blowing a few breakers in the process) that HIPerWall would need a lot more power than was available in the visualization lab at the time. The Calit2/UCI director at the time, Professor Albert Yee, agreed and ordered new power circuits for the lab. Meanwhile, postdocs Kai-Uwe Doerr and Chris Knox were busy assembling the framing and installing monitors into the 11×5 frame designed by Greg Dawe of UCSD. We had a deadline, because the Calit2 Advisory Board was to meet in the new UCI Calit2 building and Director Larry Smarr wanted to show HIPerWall. Somewhere around 3:00 PM on the day before the meeting, the electricians finished installing the power behind the wall. At that point, we moved the racks into place, put 5 PowerMac G5s on each rack, installed Ethernet cables, and plugged the monitors and Macs into power. Once we booted the system, it turned out that TileViewer just worked. We were done making the system work by 6 PM, and it was a great surprise for Larry Smarr that HIPerWall was operational for the meeting the next morning.

Larry Smarr at initial HIPerWall demo

Falko Kuester at initial HIPerWall demo

Sung-Jin Kim then turned to distributed visualization of other things, like large datasets and movies, also in a highly interactive manner. The dataset he tackled first was Normalized Difference Vegetation Index (NDVI) data, so the new software was initially named NDVIviewer. This software allowed the import of raw data slabs that could then be color coded and rendered on the HIPerWall. In keeping with the "interactive" theme, each data object could be smoothly moved anywhere on the display wall and zoomed in or out as needed. Once again, the display node software figured out exactly what needed to be rendered where and did so very rapidly. The NDVI data comprised sets of 3D blocks of data that represented vegetation measured over a particular area over time, so each layer was a different timestep. The software allowed the user to navigate forward and backward among these timesteps in order to animate the change in vegetation. The picture below shows NDVIviewer running on HIPerWall showing an NDVI dataset.

NDVI visualization on HIPerWall

NDVIviewer was also able to show an amazing set of functional MRI (fMRI) brain scans. This 800 MB data set held fMRI brain image slices for 5 test subjects who were imaged on 10 different fMRI systems around the country to see whether machines with different calibration or from different manufacturers yield significantly different images (they sure seem to do so), for a total of 50 sets of brain scans. NDVIviewer allowed each scan to be moved anywhere on the HIPerWall, and the user could step through an individual brain by varying the depth, or step through all of them simultaneously. In addition, the Cg shader image processing could be used to filter and highlight the images in real time. Overall, this was an excellent use of the huge visualization space provided by HIPerWall and never failed to impress visitors.

fMRI dataset visualization on HIPerWall
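
For a sense of what that per-pixel filtering looks like, here is a small CPU-side sketch in Java of a window/level style recoloring pass over a grayscale slice. It is purely illustrative: the real system did this work in Cg shaders on the display nodes' GPUs, and the function names and the window and level values below are assumptions.

```java
// Illustrative window/level filter over a grayscale slice (values 0-255).
// The real HIPerWall pipeline did equivalent per-pixel work in Cg on the GPU;
// this CPU version just shows the arithmetic.
public class WindowLevelSketch {

    /** Remap a pixel so that [level - window/2, level + window/2] spans 0..255. */
    static int applyWindowLevel(int value, double window, double level) {
        double lo = level - window / 2.0;
        double scaled = (value - lo) / window * 255.0;
        return (int) Math.max(0, Math.min(255, scaled));
    }

    public static void main(String[] args) {
        int[][] slice = {
            { 10, 80, 120 },
            { 90, 140, 200 },
            { 30, 60, 250 },
        };
        double window = 100, level = 120;   // a narrow window highlights mid-range values

        for (int[] row : slice) {
            for (int v : row) {
                System.out.printf("%4d", applyWindowLevel(v, window, level));
            }
            System.out.println();
        }
    }
}
```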

NDVIviewer could do much more than just show data slices. It showed JPEG images with ease, smoothly sliding them anywhere on the wall. It could also show QuickTime movies, using the built-in QuickTime capability of the display node Macs to render the movies, then showing the right portions of the movies in the right place. While this capability had minimal scientific purpose, it was always impressive to visitors, because a playing movie could be resized and moved anywhere on the HIPerWall. The picture below shows a 720p QuickTime movie playing on HIPerWall.

HD movie playing on HIPerWall

Sung-Jin Kim added yet another powerful feature to NDVIviewer that allowed it to show very high-resolution 3D terrain models based on the SOAR engine. SOAR is extremely well suited for tiled display visualization, because it is a "level-of-detail" engine that renders as much as it can of the viewable area based on some desired level of detail (perhaps dependent on frame rate or user preferences). NDVIviewer's implementation allowed the user to vary the level of detail in real time, thus smoothing the terrain or rendering sharper detail. The movie below shows SOAR terrain rendering on HIPerWall.
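
To make the level-of-detail idea concrete, here is a tiny, hypothetical sketch in Java (not SOAR's actual code; the function and parameter names are made up) of the kind of test such an engine performs: refine a terrain vertex only if its error, projected to the screen, is larger than the user's pixel tolerance.

```java
// Illustrative level-of-detail test in the spirit of SOAR-style refinement:
// refine a terrain vertex only if its world-space error, projected to the
// screen, exceeds a user-adjustable pixel tolerance.
public class LodSketch {

    /** True if the vertex's error is visible enough to warrant refinement. */
    static boolean shouldRefine(double worldError, double distanceToEye,
                                double screenHeightPx, double fovRadians,
                                double pixelTolerance) {
        // Approximate projected size of the error in pixels at this distance.
        double pixelsPerWorldUnit =
            screenHeightPx / (2.0 * distanceToEye * Math.tan(fovRadians / 2.0));
        return worldError * pixelsPerWorldUnit > pixelTolerance;
    }

    public static void main(String[] args) {
        // Lowering the tolerance sharpens detail; raising it smooths the terrain,
        // which is the knob NDVIviewer exposed to the user in real time.
        System.out.println(shouldRefine(2.0, 500.0, 1600, Math.toRadians(60), 1.0)); // refine
        System.out.println(shouldRefine(2.0, 500.0, 1600, Math.toRadians(60), 8.0)); // skip
    }
}
```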

Because of the power and capabilities of NDVIviewer, I started calling it MediaViewer, a name which stuck with almost everyone. An undergraduate student, Duy-Quoc Lai, doing summer research, added streaming video capability to MediaViewer, so we could capture FireWire video from our Panasonic HD camera and stream it live to the HIPerWall. With the addition of streaming video in 2006, we began transitioning the software to use the SPDS_Messaging library that I had developed for parallel and distributed processing research in my Scalable Parallel and Distributed Systems laboratory.

In addition to TileViewer and MediaViewer, several other pieces of software were used to drive the HIPerWall. The SAGE engine from the Electronic Visualization Laboratory at the University of Illinois at Chicago was the tiled display environment for the OptIPuter, so we ran it on HIPerWall occasionally. See the movie below for an example of SAGE on HIPerWall.

Dr. Chris Knox wrote a very ambitious viewer for climate data that could access and parse netCDF data for display on the HIPerWall. This allowed us to explore data sets from the UN Intergovernmental Panel on Climate Change (IPCC) on a massive scale. We could see data from many sites at once or many times at once, or both. This outstanding capability was a fine example of what HIPerWall was intended to do. The picture below shows one version of the IPCC viewer running on HIPerWall.

IPCC climate models explored on HIPerWall

Doctoral student Tung-Ju Hsieh also modified the SOAR engine to run on HIPerWall. His software allowed whole-Earth visualization from high-res terrain data sets, as shown in the movie below. This project was built to explore earthquakes by showing hypocenters in 3D space and in relation to each other. As before, each display node only renders the data needed for its displays and only to the level of detail specified to meet the desired performance.

Doctoral student Zhiyu He modified MediaViewer to display genetic data in addition to brain imagery for a project with UCI Drs. Fallon and Potkin to explore genetic bases for schizophrenia. This research turned out to be very fruitful, as HIPerWall sped up the discovery process for Drs. Fallon and Potkin. The image below shows Dr. Fallon on the left and Dr. Potkin on the right in front of HIPerWall. Photo taken by Paul Kennedy for UCI.

Drs. Fallon and Potkin in front of HIPerWall

Another software project started on HIPerWall is the Cross-Platform Cluster Graphics Library CGLX. This powerful distributed graphics library makes it possible to port OpenGL applications nearly transparently to tiled displays, thus supporting 3D high-resolution visualization. Professor Falko Kuester and Dr. Kai-Uwe Doerr moved to UCSD at the end of 2006 and continued development of CGLX there. CGLX is now deployed on systems around the world.

In the next article, I will cover the new research software from 2007 on, when I took over leadership of the project at UCI. This new software forms the basis of the technology licensed to Hiperwall Inc., significantly advanced versions of which are available as part of Samsung UD systems and as products from Hiperwall Inc. In a future post, I will cover the wonderful content we have for HIPerWall (and Hiperwall) and how easy it is to make high-resolution content these days.

Asymmetric Computing: Days of Cheap GPU Computing may be over

Reposted from my Asymmetric Computing blog.

For those of us interested in GPU computing, Greg Pfister has written an interesting article entitled “Nvidia-based Cheap Supercomputing Coming to an End” commenting on the future of NVIDIA’s supercomputing technology that has been subsidized by gamers and commodity GPUs. It looks like Intel’s Sandy Bridge architecture may end that.

If you don’t read Greg Pfister’s Perils of Parallel blog, you should. He’s been doing parallel computing for a long time and is very good at exposing the pitfalls and hidden costs of parallelism.

Added Hiperwall Description

I added a description of Hiperwall.

New bio

I added a new biographical summary to the “About me” page. It covers my work at Northrop Grumman, UCI, and Hiperwall Inc.

The History of HIPerWall: Hardware and Architecture

Once we won the NSF grant to develop HIPerWall, we had to decide the exact details of the hardware to purchase and nail down the hardware and software architecture. We knew that we wanted high-resolution flat panel monitors driven by powerful computers connected via a Gigabit Ethernet switch. We also knew that we did not want a rendering server (i.e., centralized rendering), but instead wanted the display computers to do all the work. We did want a control node that could coordinate the display computers, but it was only to provide system state to the display nodes; they would independently determine what needed to be rendered on their screen real estate and render it. We were not worried about things like software genlock or extremely tight timing coordination at the time.
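
As a rough illustration of how lightweight that control path can be (this is not the actual HIPerWall protocol; the message format, addresses, and port are assumptions made for the sketch), the control node only needs to broadcast a tiny description of the desired state, and every display node applies it locally:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Illustrative only: a control node broadcasting "what to show" as a tiny text
// message over UDP. The real HIPerWall control protocol differed; the port,
// address, and message format here are assumptions for the sketch.
public class ControlBroadcastSketch {
    public static void main(String[] args) throws Exception {
        String state = "image=ratbrain.hier originX=120000 originY=40000 zoom=0.25";
        byte[] payload = state.getBytes(StandardCharsets.UTF_8);

        DatagramSocket socket = new DatagramSocket();
        socket.setBroadcast(true);
        // Broadcast on the display network; every display node listens on this port.
        InetAddress displayNet = InetAddress.getByName("192.168.1.255");
        socket.send(new DatagramPacket(payload, payload.length, displayNet, 5000));
        socket.close();

        // Each display node receives this state and independently computes what
        // portion of the content falls on its own screens -- the control node
        // never touches the pixels, so it cannot become a rendering bottleneck.
    }
}
```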

We initially planned on using some ViewSonic-rebranded IBM 9-megapixel monitors that were intended for medical applications. These monitors met our "extremely high-res" requirement easily, but had three problems: they took four DVI inputs to drive at full resolution, so we needed computers that could handle multiple video cards (not easy back in the AGP days); their refresh rate was something like 43 Hz when driven at full resolution, so movement and videos might not be smooth; and they were being discontinued, so they became quite hard to get.

Just as we were getting discouraged, Apple came up with what became our solution: the amazingly beautiful 30″ Cinema Display. This display, which we ultimately chose, was 4 megapixels with a reasonable bezel width (for the time), and was nearly as expensive as the computer that drove it. It required Dual-Link DVI, because at 2560×1600 it is twice the resolution, hence twice the bandwidth, of a 1920×1080 high-definition TV. At the time, the only commodity machine that could drive the displays was the PowerMac G5. For a while, Apple had an agreement with NVIDIA making it the only company that could sell the GeForce 6800 cards with Dual-Link DVI, so if we wanted those monitors, we would have to drive them with Macs. Since Frank Wessel and I were Mac users, this was fine with us, because we liked the development environment and Mac OS X. Falko was rightly concerned that Macs had typically lagged Windows in graphics driver support from NVIDIA, which might have meant that we would miss out on, or be delayed in getting, important performance updates and capabilities. We arranged a trip to Apple HQ in Cupertino (at our own expense, though Apple did give us a nice lunch) to meet with some of Apple's hardware and software leadership so we could make sure the G5s would work for us. We learned a few interesting things, but one that sticks with me is Apple's philosophy of hiding hardware details. Both Windows and Linux allow programmers to set CPU affinity so a thread is locked to a CPU in order to prevent cache interference and pollution due to arbitrary scheduling (this was a big problem in operating systems at the time, but has been remedied somewhat since). Apple refused to expose an affinity API, because they figured the OS knew better than the programmers, just as Steve Jobs knows better than everyone else about everything (OK, so that's a reasonable point). While we could live with that restriction, I was amused at the (possibly correct) assumption that programmers don't know best.

Once we decided on the Apple 30″ Cinema Display and the PowerMac G5, we carefully worked to devise the right wall configuration to fit within our budget. We ended up deciding on an array of 55 monitors, 5 high and 11 wide. With the help of Greg Dawe from Calit2 at UCSD, we designed and contracted for a frame that would attach to the VESA mounts on the monitors. We ordered 10 PowerMac G5s (2.5 GHz with 2 GB RAM) and 10 monitors, so we could build a small 3×3 configuration and make sure things were working. Because these were dual-core G5s, I took one over to my Scalable Parallel and Distributed Systems (SPDS) Lab so one of my Ph.D. students could experiment to measure the cache-to-cache bandwidth. Unfortunately, someone broke into my lab (and a few others) and stole the G5, as well as a couple of Dell machines and my PowerBook G3. Since the university is self-insured and the value of everything was probably less than $5,000, we didn't get any reimbursement. As a side note, I did install a video camera in my lab, which helped capture the guy when he came back for more.

Initial HIPerWall prototype system

The 3×3 wall was a success, but we decided to drive two monitors with each Mac, because the performance was pretty good and we could save some money. Because we built the wall 5 monitors high, we couldn't easily use vertical pairs per machine, so we decided to skip the 11th column, which is how the HIPerWall ended up with 50 monitors and only 200 megapixels of resolution (50 monitors at 2560×1600 each is roughly 205 million pixels). Next time, I'll write about the software.

The History of HIPerWall: Origins

This is my attempt to relate the history of the Highly Interactive Parallelized display Wall (HIPerWall) research project that led to the development of some of the highest resolution tiled display walls in the world and eventually led to Hiperwall Inc., which commercialized the technology. This is the first part of several that will explore the origins, architecture, and software evolution of the HIPerWall and related projects.

The project was conceived as a result of collaborative brainstorming between myself and my colleague Falko Kuester, an expert in computer graphics and visualization. For a few years, we had been exploring project ideas to combine large-scale parallel and distributed computing with visualization to allow scientists to explore enormous data sets. An earlier proposal to build a 100 megapixel cave wasn't funded, but it was well enough received that we were encouraged we were on the right track.

We saw a Major Research Instrumentation opportunity from the National Science Foundation and decided to propose a flat-panel based high resolution display system. There were other, somewhat similar systems being developed, including one from Jason Leigh at the Electronic Visualization Laboratory at UIC called SAGE. These other systems made architectural and control choices that we wanted to do differently. For example, the best SAGE systems at the time consisted of a rendering cluster connected by 10 Gbps networks to the machines driving the displays, thus turning all the data to be rendered into network traffic. We wanted to develop an approach that worked very well over 1 Gbps networks, which were becoming common and inexpensive at the time. We also intended to make the system highly interactive and flexible enough to show lots of different data types at once.

We wrote an NSF MRI proposal entitled HIPerWall: Development of a High-Performance Visualization System for Collaborative Earth System Sciences, asking for $393,533. We got a great deal of help from Dr. Frank Wessel, who led UCI's Research Computing effort, in developing the project management approach and in reviewing and integrating the proposal. We included several Co-PIs from appropriate application disciplines, including pollution and climate modelling and hydrology.

The proposal was particularly well-timed because of the pending completion of the California Institute for Telecommunications and Information Technology (Calit2) building at UCI. The HIPerWall would have a prime spot in the visualization lab of the new building, and thus would not have to fight for space with existing projects. The proposal also explored the connectivity to the OptIPuter, Larry Smarr's ambitious project to redefine computing infrastructure, and to Charlie Zender's Earth System Modelling Facility, an IBM supercomputer at UCI.

I got the call from the program manager that we had won the proposal and needed to prepare a revised budget because, as with most NSF proposals, they were cutting the budget somewhat. I called Falko, and he quickly called the program manager back. He was sufficiently convincing and enthusiastic that all the money was restored, as the NSF project page shows.

The next part of this series will cover the hardware and initial software architecture of the HIPerWall.