So this morning I found a nice message in my inbox telling me about the voice beta program. Unfortunately I had to work and couldn’t test it out immediately. Later I did, though, finding just moo, who went to bed shortly after and left me with an empty test grid😉
In the evening, while the main grid was down, I found more people to talk to (like Mark Barrett, Jane Calvert and many more), and I must say this rocks!🙂 It now feels more immersive than just having a Skype call beside SL, thanks to the notification of who’s talking (I mentioned in my last post that they might be lacking this feature, but gladly they aren’t) and the 3D sound.
So for those of you not able to experience it, here is how it works: You plug in your headset, enable voice chat in the preferences (which is the default) and go to a voice-enabled region. Then, when you talk, a green waveform appears above your head, notifying you and everybody else that you are the one talking (notifying yourself mostly that it works). Optionally you can also drag the “Speech Gestures” folder from the library onto your avatar to enable voice gestures (which are right now quite silly gestures that make you look like a robot). They are triggered while you are talking, depending on how loudly you speak (i.e. volume). So the next thing I am planning to do is to get my talking-head animation from my vlogger kit into these gestures, as this might look more natural (I might even add the move-microphone-to-mouth animation🙂). OTOH Linden Lab is working on better gestures, I think.
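Conceptually, the volume-triggered gestures boil down to mapping the microphone level to a gesture. Here is a toy sketch of how I imagine that logic (the thresholds and gesture names are entirely my invention, not anything from the actual client):

```python
def pick_gesture(mic_level, thresholds=(0.1, 0.4, 0.7)):
    """Toy sketch: map a microphone level in [0, 1] to a gesture name.

    The threshold values and names are invented for illustration;
    the real client's levels and gestures will differ.
    """
    quiet, normal, loud = thresholds
    if mic_level < quiet:
        return None            # not talking: trigger no gesture
    if mic_level < normal:
        return "gesture_soft"
    if mic_level < loud:
        return "gesture_normal"
    return "gesture_loud"
```

Swapping the silly robot gestures for my own animations would then just mean binding different animations to those volume bands.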
If people walk around you while talking, you also notice that the position of their voice changes between the two channels, so you really do hear people on your left more in your left ear. People farther away are also not as loud as people close to you. What is not working is true spatial sound, though, which means that you cannot tell whether somebody is in front of you or behind you (or flying above you); you can only distinguish left and right. Having true spatial sound (done with little phase shifts, I think) is probably not that easy, but it would be a cool addition as it would make the position of people more accurate (sometimes I still had trouble finding out who’s talking, especially if their microphone was too sensitive and they had a green waveform above their head all the time thanks to breathing or computer-fan noises).
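The panning and distance effects I heard can be sketched with a simple toy model (my own guess, in Python, not Linden Lab’s implementation): project the speaker’s position onto the listener’s left/right axis for the pan, and attenuate by distance. Notably, this kind of model reproduces exactly the limitation I ran into, since a speaker straight ahead and one straight behind produce identical gains:

```python
import math

def stereo_gains(listener_pos, listener_facing, speaker_pos, rolloff=8.0):
    """Toy model: pan by left/right angle, attenuate by distance.

    Positions are (x, y) tuples; listener_facing is a unit vector
    pointing where the listener looks. The rolloff constant is made up.
    """
    dx = speaker_pos[0] - listener_pos[0]
    dy = speaker_pos[1] - listener_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return 1.0, 1.0  # speaker on top of the listener: full volume

    # The listener's "right" axis is the facing vector rotated -90 degrees.
    right_axis = (listener_facing[1], -listener_facing[0])
    # Pan in [-1, 1]: -1 is fully left, +1 is fully right.
    pan = (dx * right_axis[0] + dy * right_axis[1]) / dist

    # Simple inverse-distance attenuation.
    attenuation = rolloff / (rolloff + dist)

    # Constant-power panning keeps overall loudness steady across the arc.
    left = attenuation * math.cos((pan + 1) * math.pi / 4)
    right = attenuation * math.sin((pan + 1) * math.pi / 4)
    return left, right
```

Because the pan only uses the left/right projection, a speaker in front and one behind both get `pan = 0`, which is precisely the front/back ambiguity; resolving it would need phase or filtering tricks (HRTFs) rather than plain amplitude panning.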
Performance-wise it also seems quite OK. With lots of people talking there seemed to be no notable problems, and the sound quality is quite good.
What is still missing is a way to mute people. So if somebody comes close and makes some awful noises, you cannot do anything about it. Related to this are general moderation controls for land owners. If you had those, you could easily do spontaneous events without the need for Shoutcast servers and the like, but of course you’d need to be able to, e.g., mute everybody except the one giving a talk. If this were implemented, it would mean that you’d have more control over speech than you now have over chat text😉
What they also promised (but which doesn’t seem to be there yet) is the possibility of person-to-person communication (like normal telephone calls or IMs) and group conferences (from what I’ve heard, up to 1000 people should be supported then, but I hope not all of those 1000 will be talking at the same time).
And then there’s one annoying thing IMHO: The audio position is attached to the camera (meaning your ears are attached to it), so if you zoom around you will hear the people at the spot you zoomed to, but not those around you anymore (and I like zooming around when meeting with people to see where I am or what’s happening around me). This is slightly annoying IMHO, and there should maybe be a preference setting to keep your ears at your avatar.
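Conceptually the preference I’m asking for is tiny; a sketch (the setting name `ears_at_avatar` is my invention, and today the client effectively always behaves as if it were off):

```python
def listener_position(avatar_pos, camera_pos, ears_at_avatar=False):
    """Pick where the audio listener sits.

    Hypothetical preference: keep your ears at your avatar instead
    of following the camera while you zoom around.
    """
    return avatar_pos if ears_at_avatar else camera_pos
```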
So all in all it’s a great addition to Second Life and I cannot wait to have it deployed on the whole main grid!🙂