Much real progress in the software field has been made by brilliant people who just don't give a damn about the commercial aspects of software development and deployment. In fact, some of the best software developed by a single person is superior to, and more useful than, similar software developed by an entire corporate coding team. This is not to put down software development teams, but in this arena I've discovered that simply throwing people and money at the problem is no solution. It takes talent; talent that is often not associated with traditional coding. I think we were successful with the development of v-people because the art has always come first.
My team of coders reported to a coordinator who had some coding talent, but his main background was graphic design and animation. I was the creative/art director, one step removed from the programmers. It worked out well because we spent a lot of time laying out the overall impression and function we wanted in the final product. Taking it one step at a time, I would give feedback to the coders, who often "didn't get it." They could give me what I wanted, but they just didn't see the need for it. It's a difference of approach between pure function and affective engagement. Where I see the affective component as the most important, many see it as entirely superfluous. Unfortunately, it is only superfluous for a small segment of the user population. So designers need the big picture. You are not designing virtual human interfaces for yourself; you're designing them for the general population.
In order to maximize your success, I suggest you design your interfaces to be self-adapting to the user's cognitive style, as explained in the book. With a few well-designed interactions you'll know whether the user would prefer a standard no-frills interface to a helpful v-person. Your system should then offer such an interface and switch to it. Actually, you don't even need to do the probing first; you can just offer a choice right up front. I've simply found that giving people a taste of the v-person interface can be a good thing. They may or may not like it. That can be quickly determined and adjustments made, but at least they got the exposure.
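As a rough illustration of the probe-then-adapt idea, here is a minimal sketch in Python. The probe responses, the scoring rule, and the two interface labels are my own assumptions for the example, not anything from a real system:

```python
# Hypothetical sketch: pick an interface style from a few probe interactions.
# Each probe response is recorded as True (user reacted well to the v-person
# exchange) or False (user seemed to want a plain, no-frills interaction).

def choose_interface(responses):
    """Tally a handful of yes/no probe reactions and pick an interface mode."""
    positive = sum(1 for r in responses if r)
    # A majority of positive reactions keeps the v-person interface;
    # otherwise fall back to a standard no-frills UI.
    return "v-person" if positive > len(responses) / 2 else "no-frills"

print(choose_interface([True, True, False]))   # leans toward the v-person
print(choose_interface([False, False, True]))  # prefers the plain interface
```

In practice the probes would be real conversational exchanges rather than booleans, but the shape is the same: a few cheap observations up front, then a switch the user can always override.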
Get A Jump Start
The A.L.I.C.E. Artificial Intelligence Foundation has been exceptionally active in developing some of the very best natural language processing I've seen. In fact, Dr. Richard Wallace, the father of ALICE, has won the Loebner Prize twice. Everything these folks develop is available free. You can download source code and build really superb virtual humans as is, or you can modify the code to have your bot perform specific functions. If you're not a coder, you may have some difficulty with the ALICE approach because it is fairly technical, and there is really only one book on the subject, written by Dr. Wallace. Even with the book, some elements may remain a bit abstruse to the non-technical type. If you do grasp this kind of thing, though, ALICE can give you a massive jump start in building a corporate virtual human interfacing program.
Sharing is Good for the Cause
By sharing resources, we can move ahead more quickly. Each of us will be good at different things, but all of us will have access to everybody's talents. Sharing can certainly reduce the paranoia that's plagued the Virtual Humans movement since the beginning. I was part of that: while at Virtual Personalities, Inc., I was always worried that someone would steal our ideas and our code. Well, I've seen the light. We all need to work together on this. At the Virtual Humans conferences you could just feel the paranoia. People giving presentations would leave out critical information, and few places had downloads back then.
One way to share is to send us your best personality scripts. I'm going to see if I can get a contest going with a great software prize. If you build your own neat and easily scriptable engine, share it with us and we'll give you feedback. But make sure your ideas are protected. I'm researching that as we speak and I'll post an article on it here. There are some links on the ALICEbot site that may get you started.
If you write an article that would be of interest to us, please send me the link and I'll put a link here on the sharing page.
Let's Start with Things We Need to Invent
One of the things we can do to help stimulate thinking is to make suggestions about what needs to be done. As an example, I'm making a short list of the things that would help me build better v-people. Just about everything on the list is doable by a bright person with the ability to creatively and doggedly chase the solution.
- The ability to "notice" things. That is, in a conversation the v-person might notice that you tend to say "please" a lot and might comment on it. A robot might "notice" that circles and spheres look a lot alike, and then ask you to explain the difference. This is not as far-fetched as you might think, because there is much industrial code designed to "notice" flaws in production-line items or in sentence structure. If the v-person is in the habit of identifying things it encounters, it can "notice" when it comes across something it has no identity for. There is visual software for identifying designs, from letters to geometrical objects and even specific people, just about anything. The difference between identifying and noticing is significant. In the latter, the v-person observes similarities and differences (and we know how to do this with comparator code) between what it perceives and what it already knows. It's easiest to describe this visually, but in practice it can apply to conversational items as well as sensor-perceived items.
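The comparator idea above can be sketched in a few lines of Python. The feature sets, the similarity threshold, and the phrasing of the remarks are all invented for the example; real noticing would work over conversational and sensor data, not hand-typed features:

```python
# Illustrative "noticing" via comparator code: compare a perceived item's
# features against everything already known, then remark on strong
# similarities or on complete novelty.

KNOWN = {
    "circle": {"round", "geometric", "flat"},
    "sphere": {"round", "geometric", "solid"},
}

def notice(name, features):
    """Return the remarks a v-person might make about a perceived item."""
    remarks = []
    for known_name, known_feats in KNOWN.items():
        if known_name == name:
            continue  # don't compare an item with itself
        shared = features & known_feats
        if len(shared) >= 2:  # arbitrary similarity threshold
            remarks.append(
                f"{name} and {known_name} look a lot alike: {sorted(shared)}"
            )
    # Nothing in common with anything known -> it has no identity for this.
    if not any(features & feats for feats in KNOWN.values()):
        remarks.append(f"I have no identity for {name}; can you explain it?")
    return remarks

print(notice("sphere", {"round", "geometric", "solid"}))
print(notice("widget", {"plastic"}))
```

The first call surfaces the circle/sphere resemblance; the second triggers the "no identity" clarification request, which is exactly the habit described above.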
- Ability to attach meanings to words. For example, if she hears a new word she could ask you a set of questions that would allow her to assign values to that word, not just a definition. She might define the word planet as: "spherical, cold, distant, astronomical, hard, cosmic scale." She might classify a golf ball as: "spherical, neutral, proximate, recreational, hard, human scale." Combining both these new abilities, she might notice that planets and golf balls have two characteristics in common. She might mention this and ask the user for more information on differentiating the two. Eventually she should be able to build up a conceptual database of spherical objects. If she does this for everything, she might create her own vocabulary and word-usage matrix. This leads to the next idea....
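A minimal sketch of this feature-tagging idea, using the planet and golf-ball feature lists from the text (the `lexicon` structure and function names are my own invention):

```python
# Attach feature values, not just definitions, to words, then spot
# what two words have in common by set intersection.

lexicon = {}  # word -> set of feature values

def learn_word(word, features):
    """Store the feature values assigned to a word after questioning."""
    lexicon[word] = set(features)

def common_features(a, b):
    """Characteristics two words share -- grist for a follow-up question."""
    return lexicon[a] & lexicon[b]

learn_word("planet",
           ["spherical", "cold", "distant", "astronomical", "hard", "cosmic scale"])
learn_word("golf ball",
           ["spherical", "neutral", "proximate", "recreational", "hard", "human scale"])

print(common_features("planet", "golf ball"))  # the two shared characteristics
```

The intersection comes back as spherical and hard, which is precisely the overlap she would mention before asking the user how to tell the two apart.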
- Concept matrix -- we need a really good way for virtual humans to build up concepts against which they can compare the world as they perceive it. They would use these concepts to make predictions and ask questions. It's not an insurmountable leap to go from that to concept visualization. That is, they would size up a situation in terms of concepts they understand and then make predictions about outcomes, obtain clarifying information, and then take action. They should also be able to combine information with concepts in reasonable ways to come up with unique new concepts, which they can test. Ultimately v-people must be able to perceive the world, compare what they perceive with internal visualizations (I use this term loosely), and then take appropriate action. That action might be to clarify and enhance their "understanding" of a situation. It might be to comment or make an observation. Or it might be to take behavioral action that is appropriate. Thus instead of operating via a large set of pre-defined NLP rules, they would begin generating at least some of their behaviors autonomously.
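A toy concept matrix might look like the sketch below: each concept stores the features it expects, and a perceived situation is matched against them to either venture a prediction or fall back to a clarifying question. The concepts, features, and threshold are all illustrative assumptions:

```python
# Minimal concept-matrix sketch: concepts hold expected features; a
# perceived situation either matches a concept well enough to act on,
# or triggers a request for clarification.

concepts = {
    "ball game": {"ball", "players", "field"},
    "meeting": {"people", "table", "agenda"},
}

def size_up(observed):
    """Match observed features against every concept; predict or clarify."""
    best, best_overlap = None, 0
    for name, feats in concepts.items():
        overlap = len(observed & feats)
        if overlap > best_overlap:
            best, best_overlap = name, overlap
    if best_overlap >= 2:  # arbitrary confidence threshold
        return f"This looks like a {best}."
    return "I don't recognize this situation; can you tell me more?"

print(size_up({"ball", "players"}))
print(size_up({"cloud"}))
```

The point of the sketch is the two-way branch: a good match produces an autonomous prediction, while a poor one produces the clarifying behavior described above, rather than a canned NLP response.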
- Ability to learn by direct verbal input -- Tel Monks and I (mostly Tel) were able to create an engine that could learn things that you explained to it. If it doesn't understand, it should be able to ask for clarification. This is more than just learning a new response when it doesn't have one. It's the ability to acquire knowledge and fit it into a logical, hierarchical matrix from which it can both retrieve the information and make comparisons with other information. See the next suggestion.
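Here is one very simplified way such verbal learning could be structured; this is my own sketch, not Tel's engine. Statements in a fixed "a X is a Y" pattern are filed into a child-to-parent hierarchy, and an unknown parent triggers a clarification request instead of a guess:

```python
# Toy verbal-learning sketch: statements like "a dog is a animal" are
# stored in a hierarchical matrix (child -> parent); unknown parents
# trigger a clarifying question rather than a blind guess.

taxonomy = {"thing": None}  # the root of the hierarchy

def learn(statement):
    """Parse a fixed 'a X is a Y' statement and file it, or ask."""
    words = statement.lower().split()
    child, parent = words[1], words[-1]
    if parent not in taxonomy:
        return f"What is a {parent}?"  # ask for clarification first
    taxonomy[child] = parent
    return f"Understood: {child} is a kind of {parent}."

print(learn("a animal is a thing"))
print(learn("a dog is a animal"))
print(learn("a quark is a fermion"))  # unknown parent -> clarification
```

Because the knowledge lands in a hierarchy rather than a flat response table, it can later be retrieved and compared with other entries, which is what the next suggestion builds on.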
- Generalization -- Tel and I have also been able to create a rudimentary system able to "conceptualize" in much the same way as taxonomists build their relationship trees. The nice thing is that she could generalize. For example, as I defined "human" for her, she could apply general human characteristics to anyone she identified as human. I could ask her: "Tasha, do you know how many fingers I have?" and she might respond, "Peter, you probably have ten fingers; most folks do. (beat) Wait; did you lose any fingers along the way, or do you have a relevant birth defect?" "No." "Well then, you have ten." This actually is an intelligent behavior that combines generalization and clarification based on rules she's learned about fingers -- they can be lost, and the number can vary from ten if there are relevant birth defects present. Rules of logic serve well here. (Human fingers = 10 if NOT-lost AND NOT-birthdefect ELSE CLARIFY)
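The finger rule at the end of that item translates almost directly into code. The function below is a literal rendering of "Human fingers = 10 if NOT-lost AND NOT-birthdefect ELSE CLARIFY"; treating an unasked question as grounds to clarify is my reading of Tasha's behavior:

```python
# The generalization-plus-clarification rule from the text:
# Human fingers = 10 if NOT-lost AND NOT-birthdefect ELSE CLARIFY.

def finger_count(is_human, lost_fingers=None, birth_defect=None):
    """Generalize from the 'human' concept, but clarify whenever a
    learned exception might apply or hasn't been ruled out yet."""
    if not is_human:
        return "CLARIFY"  # no generalization available for non-humans
    if lost_fingers is None or birth_defect is None:
        return "CLARIFY"  # the exceptions haven't been asked about yet
    if not lost_fingers and not birth_defect:
        return 10         # the default generalization holds
    return "CLARIFY"      # an exception applies; ask for the real count

print(finger_count(True))  # exceptions unknown, so she asks first
print(finger_count(True, lost_fingers=False, birth_defect=False))
```

The first call mirrors Tasha's "(beat) Wait;" moment -- she clarifies before committing -- and the second mirrors her "Well then, you have ten."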
- Plug-in Architecture -- We need a simple way of plugging in different faces, personalities, languages, knowledge bases, courseware, control capability, sensors, you name it. If we're smart, we'll start now to create architectures that allow this kind of easy, standardized adaptation. There is a lot known about how to design and build plug-in architectures; much of it comes from graphics packages, from Adobe Photoshop to Alias|Wavefront's Maya. They also come with scriptable languages, much as our virtual humans do. Perhaps they'll always be close relatives. So far as I can tell, no one has really done the work necessary to launch a virtual human engine with such architecture. BTW, I use the term virtual human engine rather than natural language engine because it could well be a neural net or some other AI approach that is still a virtual human engine.
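To make the plug-in idea concrete, here is a bare-bones sketch of what such an engine's registry might look like. The class names, the `personality` slot, and the `reply` method are all hypothetical; a real architecture would also need versioning, discovery, and a published interface contract for each slot type:

```python
# Hypothetical plug-in registry for a virtual human engine: faces,
# personalities, knowledge bases, and sensors all register under a
# named slot and can be swapped without touching the core engine.

class VirtualHumanEngine:
    def __init__(self):
        self.slots = {}  # slot name -> installed plug-in object

    def plug_in(self, slot, component):
        """Install (or replace) the component occupying a named slot."""
        self.slots[slot] = component

    def respond(self, text):
        # Delegate to whatever personality plug-in is installed.
        personality = self.slots.get("personality")
        if personality is None:
            return "No personality installed."
        return personality.reply(text)

class CheerfulPersonality:
    """One interchangeable personality; any object with reply() fits."""
    def reply(self, text):
        return f"Happy to help with: {text}"

engine = VirtualHumanEngine()
engine.plug_in("personality", CheerfulPersonality())
print(engine.respond("the weather"))
```

The design point is that the engine never names a concrete personality class; it only knows the slot and its interface, which is exactly what lets faces, knowledge bases, and sensors be standardized and swapped the same way Photoshop swaps filters.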