As requested by Johan, I cross-posted this article from my blog
to the MindBlizzard blog as a (pretty long) report on the Virtual World Symposium we attended yesterday. As Johan already introduced the meeting in his previous post, I'll just get started with taking you through the day:
The Virtual Worlds symposium started with an introduction by Johan Vermij, who did a good job at outlining the problems the educational system faces now, problems that will only grow in the very near future. The very real problem of information overload was illustrated by the amount of information we produce and consume. Someone reading the New York Times for a week is exposed to more information than a person 100 years ago would have received in their entire life. We have produced more information in the last year than in the previous 5,000 years, and the amount will double every year for the next decade.
The question, as presented by the Eduverse organisation, is whether, faced with this constant river of information, education will still be able to keep up: deal with this flow of content, and present and filter it in a meaningful way before it becomes outdated or obsolete. Combine this with the fact that 80% of our cognitive skills are visually oriented, and you get the mission statement of the evening. How can Virtual Worlds help turn the information overload into a source of value, help index and understand it, and contribute to the educational system? Continue reading below (long post!)
PaperVision3D / Paperworld
We started to look for an answer in probably one of the least engaging and interesting presentations of the day, so bear with me on this one. A visibly tired Trevor Burton explained in 4 slides of PowerPoint how the new Flash-based technology 'Papervision3D' would be able to create 'clientless immersion': 3D worlds running in your browser. It took about 5 minutes to race through the technical slides, at which point the presentation came to a halt.
When he was reminded he could actually show us the (alpha-stage) application Paperworld, we saw an Internet Explorer window with a really simple 'outer space' scene, in which he could control a spaceship with mouse and arrow keys. Trevor said the quality of the visuals was about the standard of the PlayStation 1 console. Opening another browser window, he could log in a second spaceship, which showed up in both windows, demonstrating the 'social' possibilities of this clientless 3D environment.
The technology itself has a long way to go but obviously has potential. Having your 3D environment directly in your browser removes the hassle of downloading, installing and updating a client; Trevor pointed out that upgrades/updates of Papervision would be ubiquitous, and the software runs on any platform and is completely open source. The link to education remained unclear as Trevor rushed off to some much-needed sleep.
The Education Coop
Next up was a Skype video conference, but Skype didn't want to play ball, and we ended up with half a Second Life voice conversation on both the Journal of Virtual Worlds Research and 'The Education Coop', a collaborative programme of teachers world-wide. The Journal is (unsurprisingly) an 'eLearning' effort: a gathering place for information on interactive learning in Virtual Worlds, run by Jeremiah Spence.
The Education Coop can best be described as a community for 'metaverse' teachers. Joe Sanchez told us that in order to join this virtual community in Second Life, which consists of a virtual village, you need to be verified as a person (name, occupation etc.) to keep the community a professional, high-quality collaborative programme for education. The Education Coop organises seminars and meetings to discuss work strategies and experiences with (virtual) teaching.
Both seem good examples of organising virtual learning from inside universities. These are both teacher/professor initiatives to find ways to use Virtual Worlds as added value to their traditional ways of teaching students.
Peer 2 Peer and Virtual economies
The fourth speaker of the evening was Brandon Wiley, a student at the University of Texas who has been working with peer-to-peer programs ever since they surfaced on the internet. The recent developments in Virtual Worlds going open source gave him new insights into creating 'virtual' value.
He explained that, looking closely at the economy of Second Life, its problem is the same one the music industry is facing at the moment. Virtual goods can be copied and quickly become infinite in supply, reducing the effective value per unit to $0. The problem is that the economy of Second Life is built by placing money from the real world into the virtual one to buy objects, but when these objects become free, the economy crumbles.
Another economic model is found in closed systems, like games. The economy in World of Warcraft is not created by an outside source, but by user participation. Killing a monster creates gold from the items it drops. Though time is invested in killing monsters, the problem with this model is that it doesn't scale well. The value created only exists within a controlled environment, which is going to disappear in an open-source, peer-to-peer Virtual World network. One could simply 'cheat' the system by adding limitless amounts of 'gold' to your own environment and then transferring it to someone else's, creating the same problem: a collapsing economy because the currency lacks value.
According to Brandon, a solution can be found in a game for kids and a system you are certain to have seen before, the 'captcha': the question you get before commenting or signing up, where you are asked to type in the letters you see in a distorted picture to verify you are real. The game which inspired Brandon was Puzzle Pirates, where kids solve puzzles for rewards. The interesting thing about puzzles is that they require human attention, and human attention (focus) cannot be copied and is limited.
The implication is that the 'currency' of peer-to-peer Virtual Worlds is attention, which would retain its value anywhere, in any world or platform. When asked about the relation between this insight and eLearning, Brandon saw a future in using these puzzles as a new learning process: a direct (instant) reward system for training and obtaining knowledge, and a new way of motivating education.
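As an illustration of how attention could back a currency, here is a minimal sketch (my own, not Brandon's code) of a captcha-style challenge: each correctly solved challenge proves a unit of human attention was spent, and only then is a credit minted. The distorted-image rendering step is omitted, and all names are hypothetical.

```python
import hashlib
import secrets
import string

def new_challenge(length=6):
    """Create a random challenge. In a real captcha this text would be
    rendered as a distorted image that only a human can read."""
    text = "".join(secrets.choice(string.ascii_uppercase) for _ in range(length))
    token = hashlib.sha256(text.encode()).hexdigest()
    return text, token

def redeem(answer, token, balance):
    """Mint one 'attention credit' if the challenge was solved correctly;
    the credit cannot be mass-copied because solving took real human focus."""
    if hashlib.sha256(answer.encode()).hexdigest() == token:
        return balance + 1
    return balance
```

Storing only the hash of the answer means a verifier can check a solution without being able to generate solutions in bulk itself, which is the property that makes the resulting credits scarce.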
NASA Learning Technologies
The last presentation before the first break was by Stephany Smith of NASA. Stephany showed us, in a PowerPoint-supported presentation from within Second Life, what NASA has done with virtual environments so far. The work NASA does mostly focuses on visually presenting complex data and creating direct visual representations of real-time events. Programs already in use are:
- Rover Ranch - A place to learn about robotic engineering. You can learn about the development of robots, their elements and systems.
- Volcano Viewer - A 3D visualisation of real and simulated volcanoes, to watch their activity and to learn how they work.
- World Wind - Same as the Volcano Viewer, only focused on the phenomenon of whirlwinds.
- Virtual Lab - A program adopted by Microsoft to explore surfaces at the nano level.
Furthermore she indicated NASA is working on its new eLearning roadmap and vision on education. The roadmap consists of the essential '3E' program: Educate, Explore and Experience, and is a way for NASA to interest possible future employees (children) in science and technology.
Dr. D. Danforth shows the 'Virtual Sperm Tour'
After the break came a highlight in showing the potential of 3D environments for teaching complex matters: a demo by Dr. D. Danforth of Ohio State University, who built a model of how sperm develops. Apparently this is a very hard thing to explain, and visualising it has greatly aided the students in understanding the process.
The education process is a completely animated 3D presentation, supported by text in the chat window. The interactive part of the tour allowed students to get a close-up, step-by-step view of the growth of sperm cells. The tour concludes at a virtual camp-fire, where students can discuss the material with each other or with Dr. Danforth himself. The response to the 3D presentation has been positive, but real results won't be available until next month, when the students take their exams on the matter and Dr. Danforth can compare their results to those of students who haven't had the virtual tour experience.
Some details on the presentation:
- It took his students 15-30 minutes to get through the SL orientation
- The doctor, who had no prior experience in virtual worlds, also needed time to get settled in SL
- It cost him 6 months of 1 hour a day of work to build his presentation (the last 2 weeks, 4-5 hours a day)
- Text messaging as a medium was a problem for none of the students
Campus Hamburg in Second Life
The German campus of Hamburg created a platform for Second Life studies with real-life degrees. Hanno Tietgens and Dr. Torsten Reinders guided us through the virtual harbour. Students were challenged both with building the platform and with eventually providing a collaborative learning platform for fellow students. On the advantages, Dr. Torsten provided the following points:
Learn by building
Through the process of building, the students learn the details of the environment much better than from just a description or pictures. Insignificant-looking details become more obvious and deepen the understanding of the topic (in this case the shipping of containers).
Gather international expertise/speakers
Because of the virtual space, the Hamburg campus is able to invite speakers and experts from all over the world, to review the work and assignments done by the students.
Gather unfiltered feedback
Though somewhat dubious, the Hamburg campus explained it could use the anonymity of the avatars to gather unfiltered information from the students. This could be feedback on the program itself, the content of the courses, or even a 'ubiquitous examination' of the students' behaviour.
Work with things that are normally not accessible in the real world
The shipping process involves heavy machinery, not to mention ships, space and all sorts of other physical complications. By simulating this in the virtual World, the students can control all aspects of a scenario, and operate machinery normally unavailable to them without cost or risk.
Blend of theory, visual scenarios and practise
The interactive visualisation supports the theory as it is taught within the virtual setting (represented by the virtual office actually being inside the harbour itself). While the theory is explained, someone can directly show it to the students.
The social aspect of the virtual environment allowed students to collaborate seamlessly on one task or program. One could operate the crane, another the boat, and a third could be a transport supervisor or harbour master in the same scenario. This kind of collaborative learning has been shown to get students much more involved, learning from each other in the process.
Georgia Tech on Augmented Reality
For me the highlight of the evening was something I had so far only seen on YouTube. Jay Bolter (a.k.a. James Lillenthal in SL) of Georgia Tech had modified the client in such a way that it could be used for augmented reality: effectively blending the virtual and the physical worlds. Several avatars had gathered on a small 10-by-10 stage in Second Life, surrounded by a number of screens displaying these avatars in a real-world scene, a bunch of piled-up Lego blocks.
To create the illusion of virtuality in the real world, you need a camera aimed at the real-life scene, with fiducial markers to allow the software to orient itself and place the virtual images over the real world in the video. Though already pretty impressive, Jay Bolter told us there is still some difficulty in creating seamless video and audio, and the process of augmented reality doesn't scale well yet: it needs both the real-life scene covered with fiducial markers and a virtual environment to simulate the 'bumping paths', the process by which the computer recognises a wall, or a door you'd be able to walk through.
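The compositing step described above, placing virtual imagery over the camera feed at a marker's position, can be sketched with plain NumPy. This is my own simplified illustration, not Georgia Tech's code: real marker detection and pose estimation are assumed to have already produced the on-screen position.

```python
import numpy as np

def overlay(frame, sprite, top_left):
    """Paste a virtual sprite into a camera frame at the position where a
    fiducial marker was detected. Black (0) sprite pixels are treated as
    transparent, so the real scene shows through around the virtual object."""
    out = frame.copy()          # leave the original camera frame untouched
    y, x = top_left
    h, w = sprite.shape
    region = out[y:y + h, x:x + w]
    mask = sprite > 0           # which sprite pixels are actually drawn
    region[mask] = sprite[mask]
    return out
```

A full pipeline would also warp the sprite with the homography recovered from the marker's corners, so the virtual object tilts and scales with the camera viewpoint; this sketch only shows the flat paste.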
Even though Bolter compared the demonstration to the first flight of the Wright Brothers, his ambition is remarkably similar as well. The goals and applications of the research and the technique are as follows:
- The experience of ’shared space’ - mimic physical presence
- Collaborative design of 3D augmented prototypes
- Walk through historic locations
- Use HUDs (Heads-Up Displays) to make the real world more intelligent by providing a virtual layer of digital information on people, objects and locations (for example, looking at the Eiffel Tower would display its meta information, such as year of construction)
- Get the technology into the living room of people
On the last point Jay Bolter had some interesting information. He told us he thinks the Second Life client used to create the augmented reality experience will be available this summer. The hardware used (glasses and displays) is no longer held back by technical limitations but by economics: the projector glasses currently cost between $10,000 and $100,000, but could be available for little over $100 once mass production sets in. If this is to happen any time soon, it will be because of gaming applications, Jay Bolter concluded his most impressive presentation.
Virtual World Teaching Programs
After the second break we resumed the symposium with Dr. Yesha Siwan, a much-respected metaverse thinker who has created a program to introduce students to Virtual Worlds, and a course on how to set up an eLearning process for these students. It spans an estimated 13 lessons, starting with understanding the interface, moving on to building in the virtual world and customising the avatar, and ending with business models and understanding communication inside Virtual Worlds.
Dr. Yesha Siwan uses the following description of Virtual Worlds (such as Second Life, where this presentation was given): the '3D3C Metaverse'. This means a Virtual World has to be three-dimensional, have a community, and allow (user) creation and commerce in order to be part of this Metaverse.
Philosophy about the educational system
Philosopher Dr. Rhett Gayle took us back into the real world; that is to say, the world of philosophy. Through a Skype video conference, he challenged the audience to define the role of the educational system: the goal of education. Though no real consensus was reached within the audience, when he confronted his own students with the same question, 80% of them answered 'to get a job'.
He continued to say that's the way the 'system' feels for these students: "like a circus dog jumping through hoops and getting a biscuit at the end". Dr. Gayle said this is a worrying thought, especially considering the words of Johan Vermij at the start of the symposium about the pace at which information is produced and becomes obsolete. He concluded that these two things lead him to believe the process of education is more important than the content of the lessons themselves.
The process of today can be influenced by the students, but the students are not taught that they are able to change things in this new day and age. These are the same students who felt a world without jobs would be a dystopia rather than a utopia, a world they wouldn't want for themselves. On the other hand, the 20% who didn't feel the goal of education is 'to provide a job' thought a world without work would be liberating, a new freedom of the future.
The idea behind these thoughts was, as far as I could gather, that looking at the future we need to re-evaluate the role of education as a process of relaying information, and our approach towards students who feel less and less inclined to learn, rather than being motivated to acquire knowledge by a passion for a certain topic.
Sensory replacement- Seeing trough Sound
Last up was Dr. Peter Meijer with a live presentation on 'sensory replacement', a fancy-sounding term for turning vision into sound and back into vision again. The presentation was more than impressive, introducing a technology that allows the blind or visually impaired to receive images through audio. The way it works: the software, called 'The vOICe', takes a 2D black-and-white frame from the video camera mounted on the glasses and translates it pixel by pixel into sound.
Left and Right
Video is sounded in a left to right scanning order, by default at a rate of one image snapshot per second. You will hear the stereo sound pan from left to right correspondingly. Hearing some sound on your left or right thus means having a corresponding visual pattern on your left or right, respectively.
Up and Down
During every scan, pitch means elevation: the higher the pitch, the higher the position of the visual pattern. Consequently, if the pitch goes up or down, you have a rising or falling visual pattern, respectively.
Dark and Light
Loudness means brightness: the louder the brighter. Consequently, silence means black, and a loud sound means white, and anything in between is a shade of grey.
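The three mappings above can be captured in a few lines. This is my own sketch of the idea, not The vOICe's actual code: columns become scan time and stereo pan, rows become pitch, and brightness becomes loudness.

```python
import numpy as np

def image_to_sound_plan(img, f_low=500.0, f_high=5000.0, scan_time=1.0):
    """Sketch of the vOICe-style mapping for a grayscale image (0-255).
    Columns scan left to right over one snapshot, row height maps to pitch
    (higher row = higher frequency), and brightness maps to loudness."""
    rows, cols = img.shape
    freqs = np.linspace(f_high, f_low, rows)       # top row -> highest pitch
    plan = []
    for c in range(cols):
        t = scan_time * c / cols                   # moment within the scan
        pan = c / (cols - 1) if cols > 1 else 0.5  # 0 = left ear, 1 = right
        amps = img[:, c] / 255.0                   # brightness -> loudness
        plan.append((t, pan, list(zip(freqs, amps))))
    return plan
```

A synthesizer would then play each column as a chord of sine waves at the listed frequencies and amplitudes, panned accordingly, which is roughly what you hear as the characteristic left-to-right "soundscape" sweep.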
Though a remarkable technology by itself, I didn't see the direct application to Virtual Worlds or education, except for the fact that The vOICe trains the brain in a new way of recognising objects.
And so, after over 7 hours(!) of presentations, the symposium ended with an 'after-chat' and some much-needed beer. I really enjoyed the symposium and think bringing these concepts together (especially the ones a little outside the realm of Virtual Worlds, such as the last two presentations) provokes really interesting thoughts about developing the educational system and the ways Virtual Worlds can contribute to this process.
P.S. Thank you Frank and Stephan for the pictures/photos, and the entire 7 hours can be seen here.
Labels: amsterdam, debalie, digado, Education, eduverse, virtual education