UC Technology Commercialization Forum

On Thursday May 8th, I participated in the “Building a Product” panel at the University of California Technology Commercialization Forum at the Westin Hotel near San Francisco Airport. The day-long event had presentations by researchers who have products that are nearly ready to commercialize, as well as panels and talks by venture capitalists and members of industry.

The “Building a Product” panel was right after lunch, so it had great attendance. It was moderated by Dr. Thomas Lipkin from UCLA. The other panelists were Dr. Christine Ho of Imprint Energy and Dr. Michelle Brown of Olfactor Laboratories. We all had very different experiences in the transition from academia to startup, so the panel offered plenty of different perspectives.

It was exciting to hear UC President Janet Napolitano mention Hiperwall and the other companies by name during her opening address!

Story about Hiperwall installation for US Coast Guard

Homeland Security Today magazine has published an article about how a powerful yet cost-effective Hiperwall video wall solution helped the Coast Guard modernize its command center while maintaining and enhancing capability in a budget-conscious manner.

Click the link in the paragraph above or use the URL below:

http://www.nxtbook.com/nxtbooks/kmd/hst_201402/#/28

 

Hiperwall is hiring sales people

We are hiring at least one Territory Sales Representative to sell Hiperwall systems. We are looking for candidates who have some sales experience (a year or two), as well as a basic understanding of computer networking.

Hiperwall is a startup that makes software to drive video wall systems, and because our software scales from systems with just a few displays to systems with more than a hundred displays (yes, some of our customers have huge systems!), the sales growth potential is huge. The position is based in Irvine, CA.

If you or anyone you know is interested, check out:

http://www.linkedin.com/jobs2/view/10041247

 

NASA’s New Mars Panorama on Hiperwall

These are pictures of NASA’s new panoramic image from the Opportunity rover on Mars being displayed on Hiperwall. The image has more than 180 million pixels, so it would be very hard to see well on a single monitor. With Hiperwall, we can see the whole image, as shown in the first picture, by zooming out and having it fill the width of the wall.
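For a rough sense of the scale involved (assuming ordinary 1920×1080 panels, which is my assumption for illustration rather than a spec of this wall):

\[
\frac{180 \times 10^{6}\ \text{pixels}}{1920 \times 1080 \approx 2.07 \times 10^{6}\ \text{pixels per monitor}} \approx 87\ \text{monitors}
\]

At 1:1 zoom the panorama would span roughly 87 Full HD screens, which is why zooming out is the only way to take in the whole image at once, even on a 20-monitor wall.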

Zoomed out to fill the Hiperwall’s 20 monitors

We can then zoom in and see more detail:

Closer view of Mars on Hiperwall

When the image is shown at full size (1:1 zoom), there is tremendous detail visible on each monitor:

Close-up Mars on Hiperwall

The Hiperwall software makes it easy to view high-resolution images such as these, as well as movies, streaming video, and live data feeds.

InfoComm 2012 wrap-up

The Hiperwall booth at InfoComm in Las Vegas went very well. We brought a 12-monitor wall, consisting of 46” NEC thin-bezel monitors on a Premier mounting system, driven by Technovare Core i5 set-top PCs. We also borrowed two 55” monitors with embedded AMD PCs to show that our software is very flexible and can drive standalone monitors or even multiple display walls. We mounted the two monitors back to back on a single Premier mount, with one facing the aisle and the other facing into the booth. Our final configuration was the 12-panel wall, the two 55” monitors configured as a separate wall, and a large-screen HP laptop configured as a third display wall. All of these were controlled by a Gateway touchscreen PC and connected via a compact 24-port gigabit Ethernet switch.

Our sources were several minitower i5 and i7 PCs. One had a Datapath capture card connected to a 1080p Sony camera. Another had a webcam feed from a fancy D-Link pan-tilt webcam (we can control the pointing of the camera from the Hiperwall Control Node via our Sender’s built-in KVM capability). A third was running a very large, dense, and dynamic Excel spreadsheet that looked great on the display wall and showed how easy it is to display live content from proprietary applications on a Hiperwall.

A primary focus of our presentation was to show our new animation capabilities, so we ran a pre-release version of Hiperwall 3.0, and had several animation sequences configured into what we call “environments.” Our most spectacular one is a high-res photo of the Earth (NASA’s Blue Marble 2012) rotating (yes, we know the earth doesn’t rotate that way, but it looked good) with the Moon orbiting around it. The movie below is similar to the environment we used at the show, but we had our logo and other content on there as well. The major difference between our software and traditional video walls is that the animation is not a movie. Instead, we can animate any of our display content, including live feeds, on the fly, either through pre-built animations defined using our simple keyframe interface or via our XML-based web services interface. This means changes to animation steps or content being animated are trivial, which gives us a huge leg up on the traditional approaches.

Another animation we showed was a set of travel poster images designed by Saddle Ranch Digital for the Hiperwall system they installed for JetBlue. These spectacular posters were given life by our ability to move, scale, rotate, and filter them in real-time in animated sequences.

When these animations were running, passers-by stopped and stared, and many of them were intrigued enough to stay and talk to us and learn about our system. We had visitors from all over the world stop by. Some were customers, while many were dealers, integrators, and consultants. We even had a few of our competitors visit to see our product.

I believe the show went well. The booth looked good and all the hardware worked! The Hiperwall software performed well, too, despite being a beta version (we did catch a bug with the Secondary Controller, which I’ve already fixed). Many of the visitors who saw our capabilities told us they were very impressed, and it is gratifying to hear that our hard work is well received. Next year’s InfoComm is in Orlando, so I hope to see you there!

UCI EECS Colloquium Presentation: “Hiperwall: Building a Product from University Research”

I will be presenting “Hiperwall: Building a Product from University Research” at the UCI EECS Graduate Student Colloquium on 5/23. The presentation is linked below.

2012 UCI EECS Colloquium Presentation

Hiperwall 3.0 External Interface Supports Highly Dynamic Content

For several versions, Hiperwall software has offered a web-based External Interface (hereafter called the API) that allows external programs, including web browsers (yes, even Safari on iPad and iPhone), to open content and environments, close content, clear the display wall, and so on. The Hiperwall 2.0 software added the ability to assign position and attributes to specific content as it is opened and to shut down or sleep the wall (and wake it afterwards).

Hiperwall 3.0 adds an XML-based API that provides significant enhancements, including the ability to animate objects through sequences of commands that can change position, rotation, and visual effects, such as transparency and color filters. New abilities to query the size of display walls and the specifications of available objects make it possible to write an application that can tailor itself to a display wall, even if that wall is composed of multiple sub-walls distributed throughout a facility. These powerful API capabilities are similar to the new animation capabilities built into Hiperwall 3.0, an example of which is shown below, but even more flexible, since the API can be completely dynamic. The API turns a Hiperwall system into a giant sprite-drawing canvas where the sprites can be images, movies, live data feeds, streaming videos, or slideshows of any of them.

These new API capabilities are so powerful that I decided to write an example that shows just how dynamic Hiperwall content can be. Sure, a program like a slideshow or something similar would be pretty, but we already support slideshows natively, so I needed to come up with something really dynamic that demonstrates the ability to maintain control over a very energetic animation. I chose to write a Pong-like game called HiperPong. (My initial thought was Space Invaders, but I simplified the idea to Pong because I want the program to be an example for our customers who want to use the API.) This article explains the concept of operations for HiperPong, but does not provide code or documentation of the API. The source code and API documentation will only be available to Hiperwall customers via their authorized dealers once Hiperwall 3.0 is released, so please don’t request them from me.

HiperPong is a small Java program that runs on a networked computer (I’ve only tested it on Windows, but it should work on Mac or Linux). The UI is minimalist, as shown below. Since the program doesn’t have to run on the Hiperwall Control Node, a box at the bottom allows the user to type the hostname or IP address of the Control Node. The program then connects to the Control Node and queries the display wall dimensions, allowing the user to choose to play on a specific wall or on all the displays together. It also queries the available content objects and allows the user to choose objects for the two paddles and the ball. While the defaults look nice, you can make the ball a live video from a webcam or capture device, for example. There are two options: whether to clear the wall when starting (so you can choose to play the game on top of any content already running on the wall), and whether to have the program play by itself (otherwise the A and Z keys move the left paddle and the / and ' keys move the right paddle).

Once the user clicks the Start button, the PlayingField code draws the center line down the middle of the wall and the left and right paddles, as well as the initial 0 and 0 scores. It does so by creating a Drawable object for each of them. Drawable is a class included in the HiperPong example code that maintains the state of a single object and lets the program show the object on the wall, animate it, and finally close it when it is no longer needed. The program generates the XML commands to create the objects in the specified positions and then uses an HTTP POST operation to send the commands to the Hiperwall Control Node. Each object is given a name (specified by the HiperPong program using a unique ID generator) at creation time. That name is then used by the program to modify the state of the object via the API. In this way, user programs can use their own mnemonic object names rather than having Hiperwall force names upon them.
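To make that flow concrete, here is a minimal Java sketch of the pattern just described: pick a unique name for an object, build an XML create command for it, and POST the command to the Control Node over HTTP. The element names, attributes, and endpoint path below are invented placeholders for illustration only; they are not the actual Hiperwall API, whose documentation (as noted above) is available only to customers.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.UUID;

/**
 * Minimal sketch of the pattern described above: give an object a unique name,
 * build an XML command for it, and POST that command to the Control Node.
 * The XML schema and URL path here are hypothetical placeholders, not the real API.
 */
public class DrawableSketch {
    private final String controlNode;                         // hostname or IP typed into the UI
    private final String name = "obj-" + UUID.randomUUID();   // program-chosen object name

    public DrawableSketch(String controlNode) {
        this.controlNode = controlNode;
    }

    /** Build a create command placing the object at (x, y); element names are invented. */
    public String createCommand(int x, int y) {
        return "<command type=\"create\" name=\"" + name + "\" x=\"" + x + "\" y=\"" + y + "\"/>";
    }

    /** POST an XML payload to the Control Node over HTTP (endpoint path is a placeholder). */
    public void post(String xml) throws Exception {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://" + controlNode + "/api").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(xml.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Control Node responded: " + conn.getResponseCode());
    }
}
```

Because each object keeps its program-chosen name, later commands (moves, effects, and a final close) can simply reference that name, which is how HiperPong keeps track of its paddles, ball, and scores.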

After the playing field is created, the Game code starts a periodic task that runs 30 times per second. Each period, that task checks the status of the keys (if not auto-playing), sends commands to move the paddles if appropriate, checks whether the ball collides with the top and bottom walls of the playing field, checks whether the ball is contacting the paddles, checks whether the ball is out of bounds (scoring a point for one of the players), and sends a command to move the ball. The movement commands are not instant move commands, which would appear jerky, but instead use the Hiperwall software’s ability to interpolate movement, rotation, and other effects between the current state and the new state over a specified period of time. This allows smoother animation without the program having to manage it at too fine-grained a level.

The ball changes angle if it hits near the upper or lower edges of the paddles, and when the ball is hit by a moving paddle, the paddle imparts some spin, which causes the ball to rotate as it moves. If a score occurs, the old score value is commanded to fly away by shrinking and becoming transparent while rotating 180 degrees, while the new score zooms into position from a very large and partially transparent starting position. The ball then resets to the center, blinks a few times, and starts moving at a random angle. The game ends when one side gets 9 points, simply because I only made number images from 0 to 9 to import to the display wall. A video showing HiperPong autoplay is below.
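Before that, here is a rough sketch of what such a periodic task might look like. The physics is stripped down and the send method stands in for the XML command the real HiperPong POSTs, so treat it as illustrative rather than the shipped source.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/**
 * Sketch of a 30 Hz game task. The ball physics is simplified and
 * sendSmoothMove() stands in for the interpolated-move command the real
 * HiperPong sends to the Control Node; none of this is the shipped code.
 */
public class GameLoopSketch {
    private double ballX = 960, ballY = 540;        // assumed starting position
    private double velX = 4, velY = 3;              // pixels per tick
    private final double top = 0, bottom = 1080;    // assumed field bounds

    public void start() {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(this::tick, 0, 1000 / 30, TimeUnit.MILLISECONDS);
    }

    private void tick() {
        ballX += velX;
        ballY += velY;
        if (ballY <= top || ballY >= bottom) {
            velY = -velY;                            // bounce off top/bottom walls
        }
        // Paddle collision, spin, out-of-bounds scoring, and key handling would go here.
        sendSmoothMove(ballX, ballY, 1000 / 30);     // interpolated move, not an instant jump
    }

    /** Placeholder for the move-over-time command POSTed to the Control Node. */
    private void sendSmoothMove(double x, double y, long millis) {
        System.out.printf("move ball to (%.0f, %.0f) over %d ms%n", x, y, millis);
    }
}
```

Asking the wall to interpolate each move over the tick interval is what keeps the motion smooth without the client having to micromanage every intermediate frame.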

The HiperPong code shows how networked programs can use the Hiperwall API to create and manage very dynamic content on a Hiperwall display wall system. The API allows such code to create a named object, show it in a specified location with specified appearance effects, then animate it using absolute or relative movement and rotation, as well as change its appearance over time. The HiperPong code also shows how multiple XML commands can be combined into a single HTTP POST operation for efficiency. The code provides examples that Hiperwall customers can use to turn their Hiperwall into powerful digital signage or an advanced monitoring system that can dynamically show events as they occur. These new capabilities to command the system over the network mean Hiperwall systems can be flexible, beautiful, and extremely useful with simple software using the API. Though HiperPong is written in Java, the API can be used from Objective-C, Python, or anything else that can make HTTP calls and process simple XML.

The Hiperwall 3.0 software is in beta test at the time of this writing and will be shown at InfoComm in Las Vegas in June.

Cloud Computing for Home Has Huge Problems

We’re getting lots of examples of Cloud Computing for use at home these days. Examples include Apple’s new iCloud, the Siri digital assistant built into the iPhone 4S, Google Documents and Gmail, and cloud backup services like Mozy, Carbonite, and the one I use, CrashPlan. All of these store your data in the cloud (on servers somewhere on the Internet) and provide you services using that data. Cloud Computing means you don’t have to maintain infrastructure (servers and programs and such) and can use the services from nearly anywhere. It’s great for businesses that need to scale services quickly. So what’s the problem for home users?

The problem is that home Internet access isn’t up to the task of supporting these data-intensive cloud services, and, even if it were, the capacity limits put in place by our service providers would severely curtail the cloud’s usefulness. The examples below range from annoying to potentially catastrophic. For cloud computing to work for average people, these problems must be fixed. If not, a lot of people are going to have big problems, as described below.

Cloud backup is a great way to make sure your data is backed up to a remote location that will survive even if your house is burglarized or burns down. You run a program on your computer and it backs your data up to the cloud whenever you have a network connection. This means you always have a backup in case of disaster. The first problem anyone using these tools encounters is that it takes weeks to make that initial backup. That’s right – the upload speed from our homes is very slow, usually on the order of one or two million bits per second, and I think the cloud backup providers throttle even further, so the upload speed is typically not even at your bandwidth limit. Once the initial backup is made, future backups are incremental, sending only changed data, so they are usually fast. Problems can occur for people who use virtual machines (Parallels or VMware, for example), because the virtual disks they use tend to be many GB in size, so just booting a VM guarantees a significant upload, even if only changed parts of the disk are sent. Everybody is getting better and better digital cameras all the time, so more and larger photos are being stored on hard drives, and they also need to be backed up, along with our iTunes files and digital movie copies, etc. Things are getting pretty ugly, because even average users will soon have hundreds of GB of data that they care about and don’t want to lose.
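A quick back-of-the-envelope calculation shows why that initial backup takes weeks. Assuming a 200 GB library (a round number I am picking for illustration) and a 1 Mb/s effective upload rate:

\[
\frac{200\ \text{GB} \times 8\ \text{bits/byte}}{1\ \text{Mb/s}} = \frac{1.6 \times 10^{12}\ \text{bits}}{10^{6}\ \text{bits/s}} = 1.6 \times 10^{6}\ \text{s} \approx 18.5\ \text{days}
\]

Even at 2 Mb/s, that is still more than nine days of continuous uploading.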

All of the above is annoying, because our home Internet infrastructure stinks, but it gets worse: if you have a failure or loss and need to restore a backup of, say, 200 GB, your Internet Service Provider (ISP) may prevent it. Even with today’s faster download speeds, such an undertaking will take days, and with the capacity caps now being put into place by cable companies and other ISPs, we may be blocked when we hit the cap, or at least significantly slowed. Yes, I know some of the backup providers have services where, for an outrageous fee, they will mail you DVDs or maybe a hard drive with your data, but that’s on top of the usual monthly charge. So if your MacBook gets stolen, not only will you need to buy a new one, but you’ll need to pay to get your files back or risk being blocked by your ISP. Not very comforting. Perhaps cloud backup isn’t as good a deal as we thought, and we should all keep local backups as well (yes, I know that’s a good idea anyway, but it is not nearly as low-impact and convenient as cloud backup).

Apple’s iCloud is a new player in this game, and it will cause lots of trouble. One feature, PhotoStream, automatically uploads your photos to the cloud from your iPhone and then down to iPhoto. It really works and is surprisingly nifty. It took more than a GB of photos from my wife’s new iPhone 4S, sent them to the cloud, and the next day, they were in her iPhoto. That’s pretty handy! But wait, that means it uploaded a GB of photos to the cloud. Then it downloaded them again. Then iPhoto uploaded them again (at least I think that’s what it was doing when it was hogging my internet connection all day). So we’re aiming for those ISP-enforced capacity caps without even knowing it.

Even the nifty Siri assistant built into the iPhone 4S uploads the commands to the cloud for interpretation (and the results may require internet data too). So the data plan from your phone company, unless it is unlimited, will be slowly eaten away by constant Siri use. It may not be much, but it isn’t nothing.

In short, there are companies selling us cloud services for the home that will be strongly affected by limitations imposed by our network connections and by our ISPs. Before long, these competing interests will collide and we, the consumers, will be screwed. We will have to pay more if we want to use these very handy cloud services.

I have some (not nearly comprehensive) suggestions on how to avoid such a crisis.

  1. ISPs should track data usage as cloud service usage grows and adjust their capacity caps upward as needed so that even above-average users never hit them. The ISPs always say the caps only affect the top 1% or less, so they should keep it that way.
  2. Allow occasional exceptions to the capacity caps. If someone calls and says they are restoring a cloud backup, lift the cap that month, as long as it is a rare event.
  3. The services should allow preferences to be set so that we don’t upload or download so much that we trigger these caps.

Essentially, these cloud computing services will transform all of us into heavy data users on our networks, so the big bandwidth hogs will no longer be people downloading porn or pirating movies and songs, but ordinary people who take photos and movies with their phones and back up their media libraries. No longer will the ISPs be able to claim that only abusers use all their bandwidth, because it might be all of us, with the traffic generated behind our backs by automatic programs accessing the cloud on our behalf, without us explicitly initiating it.

New Hiperwall version significantly enhances functionality

Hiperwall Inc. today announced the new version 2.0 of the Hiperwall display wall software. The new version significantly enhances functionality of existing components and adds two new ones that are very powerful. See the announcement or the Enhancements list for an overview of what is new, but I’ll mention a few of the new features/capabilities and describe why they are significant.

  • Security: Any components that can connect from outside the Hiperwall LAN (such as Senders and Secondary Controllers) use authenticated SSL connections to enhance the security and integrity of the system. Sender connections, even within the LAN, use SSL to authenticate the connection.
  • Multi-Sender: The new Sender can deliver multiple portions of a computer’s screen to the display wall. This means several applications or data feeds can be shown from a single machine. Of course, the entire screen can be sent, as before. Sender performance is also improved, particularly when a Sender window is shown across a large portion of the display wall.
  • Secondary Controller: While the usual Control Node is very powerful and easy to use, Secondary Controllers are even more intuitive and easy to use. Secondary Controllers can be placed anywhere in a facility to control walls distributed throughout the area. They show a low-bandwidth view of the content on the display walls, so they can be used over wireless or at home to monitor the wall’s contents and behavior. They can also focus on a single display wall (in a multi-wall configuration) or show all active objects. You can see how easy the Secondary Controller is to use in the following video.

  • Share: Until now, the Sender has been able to show applications and other data on a Hiperwall from anywhere across the Internet. With Share, Senders can be shared with several Hiperwall systems, enabling collaboration and communications across distributed sites. Share automatically adjusts the data rate based on link conditions to each display wall it connects to, so systems connected via lower speed links will not slow down the data feed to systems connected via fast links.
  • Streamer: The Streamer can now send what is shown on a display device, in addition to the usual capture device and movie file streaming. This is not meant to replace the Sender, which sends the contents of a computer’s displays, but it typically provides a higher frame rate at the expense of much higher network bandwidth.
  • Text: Generate attractive text labels and paragraphs with any installed font in any color and with colored or transparent backgrounds. This is great for digital signage or even labeling Sender or Streamer feeds.
  • Slideshows: Slideshows now have more advanced transitions, so attention-grabbing wipe and fly motions can be used.

There are many other great new features and capabilities, but the ones listed here are the ones I think will have the biggest impact on our already very easy-to-use display wall software. The Secondary Controller makes content manipulation even easier and more intuitive than before, so customers can take advantage of Hiperwall’s incredible interactivity and flexibility. Share makes sharing content among walls and among sites quick and easy. Even small features, like content previews, make the Hiperwall experience even better than before. Visit Hiperwall.com for more information.

Hiperwall Features and Software Development

Now that we have released a maintenance update to our third software release and are closing in on our fourth release (likely this Summer), I’ll comment on how our development has changed and how we focus on what to develop and when.

At the start of 2007, the HIPerWall software primarily consisted of two programs: the original TileViewer, which handled big image viewing, and the very interactive NDVIviewer that displayed regular images, movies, video streams, and more; I called it MediaViewer (more details on both can be found in this article).

I was lucky to hire Dr. Sung-Jin Kim back to the HIPerWall project as a postdoctoral researcher, and together we set about transforming the software. Note: When I write HIPerWall, it designates the research project, which is distinct from the Hiperwall company.

Sung-Jin developed a new TileViewer that could handle all the MediaViewer features as well as deal with big images much better than the original TileViewer. He added the ability to rotate anything from a playing movie to a billion pixel image in real-time and interactively. This new TileViewer formed the basis of the Hiperwall technology licensed from UCI to the Hiperwall company. Today’s product, however, bears little resemblance to that old code.

Over the years, many of the thousands of visitors to HIPerWall expressed interest in running their own software in high resolution on the wall. When told this entailed lots of parallel and distributed programming as well as a significant overhaul of their drawing code, people shied away. We decided we needed a way for people to show their applications on the tiled display without having to rewrite their code. We also wanted to provide the ability to use proprietary programs, like PowerPoint, CAD, and GIS tools. One way of doing this is to capture the video output of a computer via a capture card and then stream the screen to the wall. We could already stream HD video, so this was certainly a workable solution, but it required very expensive (at the time) capture cards that tended to use proprietary codecs. It would also take enormous network bandwidth to stream a high-resolution PC screen. While we have this capability in the Hiperwall software today, we decided it was too brute-force and inelegant (and expensive) for the time.
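To see just how enormous that bandwidth is, consider an uncompressed stream of a single desktop. Assuming a 1920×1200 screen at 30 frames per second and 24-bit color (my assumptions for illustration, not measurements from the time):

\[
1920 \times 1200 \times 3\ \text{bytes} \times 30\ \text{fps} \approx 207\ \text{MB/s} \approx 1.66\ \text{Gb/s}
\]

That is more than a gigabit link can carry for even one source, before any compression is applied.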

I decided to use software to capture the screen and send it to the HIPerWall. I developed the ScreenSender (later renamed HiperSender, or simply Sender) in Java so it could work on Mac, Windows, or Linux, yet have sufficient performance to provide a productive and interactive experience. While the original Sender was fairly primitive and brute-force, today’s Sender software can send faster than many Display Nodes can handle and uses advanced network technology that lets us show tens of Senders simultaneously without seriously taxing the network.
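For readers curious about the basic mechanism, here is a minimal Java sketch of just the capture step, using the standard java.awt.Robot class. The real Sender layers frame differencing, compression, and networking on top of this; none of its actual code is shown here.

```java
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;

/**
 * The core idea behind a software screen sender: grab the screen as pixels,
 * then ship (a differenced, compressed version of) them over the network.
 * This shows only the capture step, not Hiperwall's actual Sender code.
 */
public class ScreenGrabSketch {
    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        BufferedImage frame = robot.createScreenCapture(screen);
        System.out.println("Captured a " + frame.getWidth() + "x" + frame.getHeight() + " frame");
        // A real sender would loop, diff against the previous frame, compress the
        // changed regions, and stream them to the display nodes.
    }
}
```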

We also started to improve the usability of the software. Initially, the software could be operated with a few key presses, but as we added more content types and capabilities, we knew we had to build a user interface. Sung-Jin and I defined an interface protocol and made a graphical user interface that allows users to choose content to display and to view and change object properties for displayed objects.

So we had this powerful software that was starting to gain attention. First, the Los Angeles Times published a nice article on the front page of the California section, followed by an interview I did for a radio station that broadcasts National Academy of Engineering content, and culminating in a CNN piece that was repeated around the world.

Somewhere around this time, Jeff Greenberg of TechCoastWorks came along to see if he could help us form a company. Because he had been in the computing technology industry for years, he was able to guide our efforts to make the software easy to use for commercial purposes. Around the end of the year, Samsung became interested in licensing our product, so the real software effort began. While it is okay for research software to crash (in fact, if it does, you can claim that you’re pushing the edge), commercial software has to work as expected, and in this case, 24 hours a day, 7 days a week, for months at a time. Therefore, any memory leaks that would have been okay for a short run in the lab were not acceptable, nor were crashes in corner cases.

We also had to work hard to improve performance. In the HIPerWall, we used PowerMac G5s with 2 or 4 processors each and advanced graphics cards (for the time). This was a pretty nice environment for our software, but the embedded PCs in Samsung’s monitors were not quite as fast and had significantly less graphics horsepower. We used a small 2×2 Samsung wall as a test bench and made the software sufficiently robust that we demonstrated it on a huge 40-panel wall at the Samsung booth at the InfoComm show in Las Vegas in June 2008. We also had to make the software multilingual, which is not as easy as it sounds, even with Java’s support for Unicode characters. The Samsung-licensed version of the software supports 8 to 10 languages.

Choosing features to develop has changed from making what we think is cool to making things that will help customers and help sales. Our software still handles gigapixel images with aplomb, but for the many control rooms and network operations centers (NOCs) that use Hiperwall, the most popular display objects are Senders (for monitoring whatever needs to be monitored) and Streamers (to keep an eye on CNN and the weather). For digital signage applications, regular images and movies are popular, along with Streamers and Senders. To coordinate these complex display layouts, we provide a way to save the state of the Hiperwall as an Environment, which can be restored easily.

We also added a Slideshow feature that can contain any of our object types with variable timing. It can even have overlays of a company logo, for example. This feature is popular both for digital signage (step through products, etc.) and control rooms where there may be more information than can comfortably fit on the wall at a given time. (Though the right answer is to buy a larger wall! 😎 )

In response to customer requests, we added scheduling capability to show different environments at different times on different days, etc. UCI’s Student Center Hiperwall system makes tremendous use of the scheduler for their very artistic content.

Another example of our responsiveness to customer needs comes from the large Hiperwall-based Samsung UD system installed at the Brussels Airport. They were using three infrared cameras to view passengers along the walkways and then show the streams on the tiled display along the walkway, as shown below. One camera was on the opposite side, so its video needed to be flipped horizontally. They used another computer to do the flip, which added some delay. Since such a flip is trivial on today’s graphics cards, we added flip options to the Streamer software, eliminating the need for the extra hardware and delay.
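To illustrate just how cheap a horizontal mirror is, here is a generic Java 2D version of the operation. It is not the Streamer's internal code (which does the equivalent on the graphics card); it simply shows that the flip is a single transform.

```java
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

/**
 * Generic horizontal flip: mirror across the vertical axis by scaling x by -1
 * and translating the result back into view. Illustrative only; the Streamer
 * performs the equivalent operation on the GPU.
 */
public class FlipSketch {
    public static BufferedImage flipHorizontally(BufferedImage src) {
        AffineTransform tx = AffineTransform.getScaleInstance(-1, 1);
        tx.translate(-src.getWidth(), 0);            // maps x to (width - x)
        AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
        return op.filter(src, null);
    }
}
```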

 

With our next release, we will add many more customer-centric features that will make Hiperwall significantly more powerful, secure, and collaborative, but I will not comment on any here until they are officially announced by the company.