Friday 29 August 2008

Face Book

I picked up an interesting book at Oxfam. 'The Human Face Reconsidered' by John Brophy isn't particularly scientific (any science it contains has probably been discredited since the book was written in 1962), but it is very poetic.
I've just been reading the section about eyes.

"It is through the eyes that we attain the most intimate communion with other human beings and step nearest to the ultimate mystery which locks us, each separately, while life endures, inside the prison-house of one body, whence we may shout and listen to other prisoners, and out of which the only peep-hole is the eyes."


If we could look each other in the eye when we talk at a distance, rather than being forced to stare at each other's navels by our screen-top webcams, we might enjoy a more intimate communion, and step nearer to the ultimate mystery. Or at the very least, have a better meeting.

Wednesday 27 August 2008

A world within a world

Just had a thought that I need to capture before it escapes:
Could you run OpenSim as a shared application inside Wonderland? This may seem like an insane idea, I realise. Why would you want a 3D environment inside a 3D environment?

Well, I'm thinking about the draft evaluation of the first pilot by Steve and Marga, and one of the important issues that has been identified is the need to know when to do 'distance' and when to do 'blended'. I think that OpenSim standalone is just right for teaching building skills in a real-life blended learning situation (Ian & Graham tutors), and Second Life is best done at a distance (Cubist and Kisa mentors).

Wonderland is closer to a blended learning environment, in that you are your real-life self, speaking with your real voice, and you can interact properly with an application (interface elements and all) like you do in an IT lab. That's why I think it might be interesting to have OpenSim standalone as a shared virtual application. I could teach some building skills at a distance without it getting muddied by the whole role-play and social complexity thing.

Well, it's just an idea.

Monday 25 August 2008

Browser Tutorial

Quick note to Open Habitat project:
The new Browser Tutorial that Linden Lab are introducing as an alternative to the current Orientation Island/HUD solution will need to be considered in relation to our first pilot. I sense new opportunities for smoother, better-designed noob inductions. Need to test.

Wednesday 20 August 2008

PC News 2012: Thousands sue for repetitive neck strain injury

Now then. A 3D head-tracking-controlled operating system. That would be interesting. Webcams are pretty widespread, so it would be easy for Apple or the Linux mob to create a head-tracking-powered 3D desktop. It would seem that many of the benefits of stereoscopic displays can be provided by realtime head tracking, at a fraction of the cost and/or inconvenience (nobody really likes wearing those glasses for very long, however spectacular the stereo vision is).
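(For the technically curious: the core trick is an off-axis view frustum that follows your head, so the screen behaves like a window rather than a picture. Here's a minimal sketch in C++; the screen dimensions, head coordinates and metre-based tracker output are all my own assumptions, not anything Apple or the Linux mob actually ship.)

```cpp
// Fish-tank VR sketch: turn a webcam head estimate into an off-axis
// (asymmetric) view frustum so the desktop appears to have real depth.
// Screen size and head position are hypothetical, metre-based inputs;
// a real system would get them from the head tracker every frame.
#include <cstdio>

struct Frustum { double left, right, bottom, top, znear, zfar; };

// The screen is centred at the origin in the z = 0 plane; the head sits
// at (hx, hy, hz) in the same coordinates, with hz > 0 (in front of it).
Frustum offAxisFrustum(double screenW, double screenH,
                       double hx, double hy, double hz,
                       double znear, double zfar) {
    double s = znear / hz;  // scale screen edges back to the near plane
    return Frustum{
        (-screenW / 2.0 - hx) * s,  // left
        ( screenW / 2.0 - hx) * s,  // right
        (-screenH / 2.0 - hy) * s,  // bottom
        ( screenH / 2.0 - hy) * s,  // top
        znear, zfar
    };
}

int main() {
    // Head 60 cm from a 40 cm x 25 cm screen, leaning 10 cm to the right.
    Frustum f = offAxisFrustum(0.40, 0.25, 0.10, 0.0, 0.60, 0.1, 100.0);
    std::printf("glFrustum(%.4f, %.4f, %.4f, %.4f, %.1f, %.1f)\n",
                f.left, f.right, f.bottom, f.top, f.znear, f.zfar);
    // Feed these values to glFrustum (or build the equivalent matrix),
    // then translate the scene by (-hx, -hy, -hz), once per frame.
}
```

Rebuild that frustum from the tracked head position every frame and the monitor starts to behave like a window.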

Now imagine that a friend's 3D head-avatar (see previous posts) pops up on your 3D desktop (well, he'd have to knock first). He can see what's on your desktop, and his 3D head rotates around so you can see which bits he's looking at. You can show him how to do something cool in Photoshop, or show him something on the web, accompanied by a VoIP conversation. Click a switch and you're looking at his screen, and he can see your 3D head-avatar on his 3D desktop.

Head-tracking-enabled MMORPG

This is the sort of thing I'm on about:

[embedded video]
I'll have a search and see if anyone has done it with Second Life yet.

Augmented reality

I've been messing about with the open-source, cross-platform ARToolKit. This uses your webcam to detect the relative position of a special pattern that you print out. I've also just stumbled on this face detection API, which does a similar thing but without the need for a special pattern (other than your face). The second video on this page suggests an interesting creative possibility. The API is a low-level C library, so it would probably be fairly easy to integrate it into a hack of the Second Life client. The idea is that you would be able to tilt your head to see round objects slightly. This would be a massive help when building things in-world, as you would get a much better sense of the relative positions of objects in the 3D space.
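(Something like this, perhaps. A hedged sketch of the glue code only: take the detected face position and slide the camera sideways to fake the parallax. The cameraOffsetFromFace function and its normalised inputs are inventions for illustration; the real coordinates would come from ARToolKit or the face detection API.)

```cpp
// Peek-around-objects sketch: map the face position reported by a webcam
// detector into a small sideways slide of the in-world camera. The
// detector itself is not shown; faceX/faceY stand in for whatever
// ARToolKit or a face detection library reports.
#include <algorithm>
#include <cstdio>

struct Vec3 { double x, y, z; };

// faceX/faceY: face centre in the webcam image, normalised to [0, 1].
// maxShift: furthest the camera may slide from its rest position (metres).
Vec3 cameraOffsetFromFace(double faceX, double faceY, double maxShift) {
    // Recentre to [-1, 1], mirroring X so leaning left peeks left.
    double nx = std::clamp((0.5 - faceX) * 2.0, -1.0, 1.0);
    double ny = std::clamp((0.5 - faceY) * 2.0, -1.0, 1.0);
    // Slide sideways/vertically only; the camera keeps aiming at the same
    // focus point, so the effect reads as parallax rather than turning.
    return Vec3{nx * maxShift, ny * maxShift, 0.0};
}

int main() {
    // Pretend the detector saw the face slightly left of centre.
    Vec3 off = cameraOffsetFromFace(0.35, 0.5, 0.15);
    std::printf("camera offset: (%.3f, %.3f, %.3f)\n", off.x, off.y, off.z);
}
```

In a Second Life client hack you'd apply this offset to the build camera each frame, leaving the focus point alone, so leaning your head genuinely lets you peer round a prim.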

Imagining a future augmentationist virtual world

OK. Time for a bit of abductive reasoning. I'm imagining a future shared virtual environment for the augmentationists.

I'm assuming that Second Life has become the best immersionist solution - great for role-play (fantasy identities, not spoilt by voice), content creation etc. So no need to bother trying to compete in these areas. The thing that I'm imagining is a 3D conferencing tool, with voice as the central communication device, and shared applications such as browsers and whiteboards to facilitate discussion, ideas generation and collaborative working.

So, let's pretend I'm a student. My tutor has given me a web link to a Java Web Start application. The first time I run it, it checks my PC spec and installs all the necessary bits on my computer. My user ID (taken from BANNER or whatever standard is in place for student IDs) is already on the database, so I just need to log in using this and my usual password. As this is the first time I have logged in, I first need to create my avatar. The application checks to see if I have a functioning webcam and, if so, allows me to take a snapshot of myself. If I have no webcam, I have the option of uploading a mug-shot instead. I click a few points on the snapshot to calibrate my face, and click 'Generate'. A 3D face is created and uploaded to the server. Now I log into the virtual world, and my avatar is my 3D face (do we need bodies in virtual worlds? I went to a virtual reality conference about 15 years ago, and one of the speakers was dead against avatars having legs).
As I wander around this 3D world, I see other faces that I recognise. I wander up to them and say 'Hello!', with my voice. They say 'Hello!' back, with their familiar voices. I click on a 'Smiley' button and my 3D face smiles at them. I can see what they are looking at by the orientation of their 3D faces. We walk up to a giant web browser and I key in a URL. My friend admires my new artwork, which I have navigated to via the browser. He clicks on an 'impressed' button and his 3D face's expression morphs into an impressed-looking version of himself. He has an idea, and draws it on the whiteboard next to the browser. We agree to meet up for a drink later to discuss our new ideas.

Looking a little further into the future, instead of a fixed 3D face, my 3D webcam places a live hologram of me into the virtual environment, massively enhancing communication via non-verbal cues. I interact with the environment by waving my arms about, maybe.

3D faces from mug-shots

[embedded interactive 3D face viewer]
I've found a great tool for generating 3D heads from a single mug-shot. (Click on the 3D face above and then move the slider.) The one above was created from a standard Photo Booth snap from the built-in webcam on my MacBook. The processing is all done via a web interface, probably server-side. It's a bit useless at the moment, as all you get is the thing above, but there are plans to allow export of the meshes and textures. I think some of the other commercial augmentationist virtual worlds might already be using something like this (Twinity?). It would make a lot of sense to see something that actually looks like the student you are dealing with in an augmentationist environment, rather than the spooky doll avatars that exist currently. It might be a good intermediate stage until the technology speeds up enough to allow the sort of live 3D video that I mentioned in my last post.

Thursday 14 August 2008

Realtime 3D face scanning

You can imagine that a low-cost, low-quality version of something like this:

http://www.youtube.com/watch?v=DiY45jALWjE
might create some interesting creative possibilities in a shared virtual environment.
I've found a few 3D scanning systems like this that use projected bands of light. The physical components of these systems tend to be off-the-shelf digital cameras and data projectors, with most of the clever stuff happening in the software. This suggests that the price of similar systems will plummet as the home-brew crowd reverse-engineer the software (patents depending). If a low-cost device could be manufactured to project the correct pattern of light onto your face, your webcam could capture the result, and the virtual world client could perform the processing required to beam you into a shared space.
A system like this would allow users of a multi-user virtual environment to present themselves in the form of a realtime 'hologram'.
It would be interesting to see how the eye-contact problem present in video-conferencing systems translates to such a holo-conferencing system.
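(For a flavour of what the 'clever stuff' in the software amounts to, here's a back-of-the-envelope sketch, emphatically not any particular scanner's algorithm: the projected stripe patterns let each camera pixel work out which projector column it is looking at, and depth then drops out of stereo-style triangulation. The idealised rectified geometry and every number below are assumptions.)

```cpp
// Band-of-light scanner sketch: Gray-code stripe patterns identify the
// projector column seen by each camera pixel; depth comes from simple
// triangulation. Assumes an idealised, rectified camera/projector pair;
// real systems also calibrate for lens distortion and relative pose.
#include <cstdio>

// Decode a pixel's stripe observations into a projector column.
// bits[i] is 1 if the pixel was lit when pattern i was projected.
unsigned decodeGrayCode(const unsigned char* bits, int nPatterns) {
    unsigned gray = 0;
    for (int i = 0; i < nPatterns; ++i)
        gray = (gray << 1) | bits[i];     // assemble Gray code, MSB first
    unsigned binary = gray;
    for (unsigned shift = 1; shift < 32; shift <<= 1)
        binary ^= binary >> shift;        // Gray -> binary (prefix XOR)
    return binary;
}

// Rectified triangulation, identical in form to stereo vision:
// depth = focal length (pixels) * baseline (metres) / disparity (pixels).
double depthFromStripe(double camX, double projX,
                       double focalPx, double baselineM) {
    double disparity = camX - projX;
    return (disparity > 0.0) ? focalPx * baselineM / disparity : 0.0;
}

int main() {
    // Four stripe patterns -> 16 distinguishable projector columns.
    unsigned char observed[] = {1, 1, 0, 1};
    unsigned col = decodeGrayCode(observed, 4);
    // Hypothetical calibration: 800 px focal length, 20 cm baseline,
    // 16 px per projector column in camera-comparable units.
    double z = depthFromStripe(420.0, 16.0 * col, 800.0, 0.20);
    std::printf("column %u -> depth %.3f m\n", col, z);
}
```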

Sunday 10 August 2008

Future-avatar dance party

In the future, the Augmentationists' avatars will look like this:
http://www.spiral-scratch.com/index.php?page=gamecam3d

Why does an avatar have to look like a Pre-Raphaelite painting? Cubist tries hard to look cubist, but Ian would prefer to look like an impressionist painting. Vibrancy and life at the expense of detail.

Imagine a shared space occupied by the live 'holograms' of real people like the one in the link. How would a virtual meeting be different if you could virtually shake hands at the start? Maybe people could learn dance moves together (dance distance learning?). The lack of authentic body-language cues in conventional conferencing systems is a major problem. Slightly glitchy but undeniably alive, these avatars might give off all the right signals to improve our chances of accurately judging mood.

The scary thing is, this could actually be quite straightforward, technically. We should play.