
OhGizmo Exclusive: Interview With The Heliodisplay Inventor

By David Ponce

A few days ago, we broke the story that the Heliodisplay was finally being produced and sold. The news took the geek world by storm. For those of you who haven’t heard, the Heliodisplay is an interactive tactile display that projects a floating image into the air, much as R2-D2’s projector does.

After the dust settled, quite a few questions were left unanswered. In a bid to do some real journalism, we contacted IO2 Technology and managed to schedule 15 minutes with Chad Dyner, the creator of the Heliodisplay.

So what follows inside is a transcript of all the pertinent parts of the phone interview. You’ll read about the inspiration behind the invention, classified technology, military applications and yes… even prices.

So let me start by congratulating you for finally turning what had seemed like a dream for all these years into reality. When we broke the story, a few days ago, it was picked up by just about everybody and then you guys were Slashdotted to death. Your site was down for a while… how bad was it?

Well, we actually had to purchase 1.2 Terabytes of bandwidth because of all of this.

Oh My God, and I thought we were doing badly after 5 or 6 Gigs!

No, no, no… 1.2 Terabytes, though to be honest, that’s only about 10% or 20% of the traffic we experienced in 2003 when we initially announced the product.

That’s incredible. Well, the excitement is understandable. This is a great product, and the first question that comes to mind is: What inspired you to do this?

There are two components to that. The first is certainly science fiction and thoughts of what the future may hold. Fiction writers can easily conjure up visions that then get drawn or realised in movies, and though in reality it takes a great deal more engineering effort to bring these concepts about, in time many of the technological hurdles are overcome, and things that seemed impossible years ago are brought to life.

But more importantly, I think it was my belief in the importance of being able to emancipate digital information into the real world, and that’s really been the inspiration for me to develop this. Starting out as an architect, I thought there was a tremendous amount of value in being able to project information into space, to have a platform for sharing information collaboratively. What I mean by that is that a client can meet with an architect, for instance, discuss a building in development, and then actually see it there in front of him. Until now, we’ve been limited to seeing that information on a conventional flat screen.

So this is something new, something that has a lot of benefit, and will hopefully prove so over time.

So then you’re saying that the initial inspiration for this was as an architect’s development tool?

Yes, exactly.

All right, so tell me, was there ever a point during the development process when you thought this whole thing wasn’t going to happen?

[Laughs] I would say we had those hurdles on a weekly basis, coming either from engineering or from the business end, and it seems like we’ve overcome all of them and we’re still going strong. I would say that right now, we’re at a very good point. Things have gone from mad inventor, to startup, to… you know

Well, to reality.

Right, exactly.

Okay, so let’s turn to the technology. You say it will accept any video source, be it DVD, TV or computer. So, how exactly does the unit interact with a computer?

It’s essentially plug and play. Now, I realised early on that if we were to bring into the world a disruptive technology such as this one, we were going to have to take incremental steps. You have to understand first of all, that the infrastructure in the world right now is built for two dimensional data. So then, as soon as you say you have a display for two dimensional data, then you get different protocols for displaying this data: NTSC, PAL, VGA on the computer and different types of video signals that can be received into our system.

What we’re releasing now is our first generation display, on which we’ve used some off the shelf components, such as a projector, which helps a lot with the interface, and we also have a USB interface. So you basically plug it into the computer, the computer prompts you to install our software drivers… and then you’re ready to go.

So the computer recognises it as a monitor? Or is there some specific software that you have to run?

No, the computer sees the unit as a secondary monitor. If you wish to use the interactivity function, simply plug in the USB cable and go through the usual driver installation procedures.

And then, how does the interactivity function actually work? Does your finger become a mouse pointer? Do you just point here and there?

Well right now, we have our first generation cursor control. So you can move the cursor around the desktop just as if it was your mouse’s pointer. Then you can send it signals, like click, or double click, just like a mouse. It’s a touchscreen, that’s really what it is.

So how does the unit recognise your hand? How does it know your palm from the tip of your finger, say?

Well, we developed an optical tracking system that has some pretty fine resolution and granularity to it and so it’s able to pick up very subtle movements and know exactly where your finger is and what you’re trying to do.

So then basically, you have a camera looking at your hand, interpreting your movements and translating them into input.

In an oversimplified way, yes.

A lot of people have expressed concerns that this technology is not really three dimensional. To me, this makes sense since all the data we have is geared to contain only enough information to display two dimensions. However, if we were to develop applications that generate data streams that contain enough information to display in three dimensions, would this machine then be able to display in 3d?

Not the current version. This is our first release.

Any plans then for a true 3d version?

No comment on that.

Okay, so then, you say on your website that the viewing area is restricted to about 150 degrees. Now, is this a software restriction, or is it due to the technology?

Let me back up. You first have to understand that we’re developing some core technology, most of which is not available to the public. A good portion of it is actually classified. It has applications for the military and a lot of other enterprises. So what we’re releasing right now, again, is a first generation display, a commercially available product aimed at a certain market. We see it, for instance, as a wonderful trade show advertising tool.

So then this is really just the tip of the iceberg as far as the technology is concerned?

[Laughs] Absolutely, absolutely. You can think of this as DOS 1.0, or maybe even the Wright Brothers’ bicycle-parts airplane. That’s what we have right now, in relation to the potential this technology has.

Okay, so are you saying that we’re only getting this little because you’re only allowed to release this much to the public, or is it that you’ve not exploited the technology to its full potential yet?

I would say it’s a combination of both.

Right, okay, so could you tell me about the physical dimensions of the units?

Sure. We build units as small as a lunch box or a conventional phone and as large as a small fridge.

What can a large one do that a smaller one can’t?

A small one will have limitations on how long it can run continuously, for example, and the image will be a lot smaller. Of course, those were the prototypes, not yet commercially available. We are now confident we know what needs to be done to scale things down and improve efficiency, and we will get that done in time.

All right. Now, a lot of people were confused by your explanation of just how the machine projects the image into the air. What exactly is a laminar layer, for instance? How does it work? Can you elaborate?

Basically, we are creating a thermal differential within the air and as soon as you do that, the air goes through a process of rapid condensation. So what we’re doing is transforming the air in a very localised fashion.

So this is making use of the same phenomenon you see when airplanes leave contrails high up in the sky, then?

Exactly. That’s really as much as I can elaborate on this point. I will say though that it doesn’t put anything in the room that wasn’t already there and it doesn’t remove anything. It does what it does very locally.

Right, okay, so then my fellow geeks and I will probably not be thinking of purchasing one of these things, right?

[Laughs] Right, right, well, our price point is in the tens of thousands. The base unit is $18,000 and the larger, 42 inch unit is $28,000. We’ve already generated a number of sales to corporate customers, though mind you, we did sell two units to a student. Of course, in the future, we certainly see developments for gaming applications.

[Laughs] That’s what I wanted to hear, Chad. All right, well, thank you for taking the time, and I wish IO2 Technology the best of luck.