I am currently looking for computer tools which could eventually address two projects that were mentioned in two separate randform posts.
One project is the idea of eventually putting up a small communication platform/database about materials and chemical/physical processes, where sustainable product designers/artists/scientists may exchange their knowledge. I will write about that later.
The other project is a kind of massively multiplayer role-playing game prototype for economics, which I suggested in a first draft in this randform post. I think about this issue on and off, and in particular I am wondering how the implementation of such a game in an academic context could influence the technical needs of the electronic platform suggested in the scientific platform article. It is in particular not clear to me how much automated emotional/perceptional/cognitive intelligence should be incorporated into such a game and/or into the platform itself.
So I looked a bit into several HCI projects. On the one hand, automated (emotional/perceptional/cognitive) intelligence can, especially in conjunction with games, be used e.g. to identify cognitive models, i.e., simply put, to identify major components of human emotional/perceptional/cognitive intelligence and thus eventually make a quantitative analysis easier. (As an example of the use of such automated intelligence see e.g. some of the ICT projects already mentioned in this randform post, or the finished project eCircus (see e.g. here or here).) On the other hand, automated intelligence may impair the actual human communication which should take place in such an environment.
While browsing I stumbled upon older projects like the OZ project or the Cogaff project at the University of Birmingham. At earlier times a lot of those projects mainly followed a "broad but shallow" architecture; from the Birmingham website:
Like the OZ project of Bates and colleagues at CMU (see below), we aim to start with “broad but shallow” architectures. That is, the architectures should accommodate and integrate a wide range of functions, such as vision and other forms of perception, various kinds of action, motivation, various kinds of learning, skilled “automatic” behaviour, explicitly planned behaviour, various kinds of problem solving, planning, self-awareness, self-criticism, changing moods, etc.
A “broad” architecture contrasts with “deep and narrow” systems, like most AI systems, e.g. systems to analyse images, or understand sentences, or solve mathematical problems, or make plans, etc.
It may be necessary for a while to tolerate relatively shallow and simplified components as we explore the problems of putting lots of different components together. Later we can gradually add depth and realism to the systems we build. Shallowness is not an end in itself.
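The "broad but shallow" idea described in the quote can be sketched in a few lines of code: many simplified components are integrated into one agent loop, instead of building a single deep component. This is a hypothetical illustration of the architectural principle only; the class and component names are my own and do not come from the OZ or Cogaff projects.

```python
# Hypothetical sketch of a "broad but shallow" agent architecture:
# breadth = many functions integrated at once; shallowness = each
# function is a deliberately simplified stand-in for a full subsystem.

class ShallowComponent:
    """A simplified placeholder for a full subsystem (vision, planning, ...)."""

    def __init__(self, name):
        self.name = name

    def update(self, state):
        # A real component would do far more; here each one just
        # records in the shared state that it ran.
        state.setdefault("trace", []).append(self.name)
        return state


class BroadShallowAgent:
    def __init__(self):
        # Breadth: integrate a wide range of functions in one architecture,
        # each kept shallow for now (depth can be added component by component).
        self.components = [
            ShallowComponent(n)
            for n in ("perception", "motivation", "emotion", "planning", "action")
        ]

    def step(self, state):
        for component in self.components:
            state = component.update(state)
        return state


agent = BroadShallowAgent()
state = agent.step({})
print(state["trace"])  # every component ran once, shallowly
```

The point of the sketch is that later one can replace any single `ShallowComponent` with a deeper implementation without changing the surrounding architecture, which is exactly the "add depth and realism later" strategy from the quote.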
Current research now seems to be concerned rather with finding a "tighter integration of shallow and deep techniques in processing".
Another interesting project in this context was the HUMAINE project; from its website:
Emotion-oriented computing is a broad research area involving many disciplines. The EU-funded network of excellence HUMAINE is currently (that is, it ran from 2004-2007) making a co-ordinated effort to come to a shared understanding of the issues involved, and to propose exemplary research methods in the various areas.
(see also their old wiki; note that the project terminated in 2007, but in the same year the HUMAINE association was founded, so the project's website is still somewhat active.)
One of the research tasks of HUMAINE was/is the above-mentioned finding of cognitive models. An older article, "Emergent Affective and Personality Model" by Lim, M.Y.; Aylett, R.S. and Jones, C.M., IVA 2006, LNAI 3361, Springer, pp 371-380, gives a brief overview of the research at that time; about the OZ project, for instance, they write:
There has been a series of effort for making artifacts with their own emotional structure. Most of these projects focus either on the cognitive aspect of emotion adopting appraisal theories, or on the neurophysiological aspect. Very few attempts have been carried out to bridge the gap between these two aspects where models such as perception, motivation, learning, action-selection, planning and memory access are integrated.
The Oz project [10, 11, 12, 13] aimed at producing agents with a broad set of capabilities, including goal-directed and reactive behavior, emotional state, social knowledge and some natural language abilities. Individual Woggles had specific habits and interests which were shown as different personalities. Social relations between the agents directly influenced their emotional system and vice versa. However, Oz focused on building specific, unique believable characters, where the goal is an artistic abstraction of reality, not biologically plausible behavior.
The "Emergent Affective and Personality Model" introduced in this article seems to be an extension of the so-called PSI model. It seems, though, that the model hasn't yet been implemented in a concrete environment, although a concrete application as a virtual tour guide on mobile phones appears to be planned (judging by, e.g., what's here on page 840).
The research of Lim and Aylett, however, also deals in particular with intercultural emotions, which would of course be an important issue in a global communication network.
A nice and very vivid introduction to how different cultural contexts lead to different behaviour in game-like environments is given in this article by Georg Scholl.
As a matter of fact, the psychological findings mentioned in Scholl's article are intended for integration into the above-mentioned PSI model.
Concluding, it seems the search for cognitive models goes on, and especially the implementation of affects into those models plays an increasing role; from the website of MIT's affective computing group (Affective-Cognitive Framework for Machine Learning and Decision Making; Hyungil Ahn and Rosalind W. Picard):
Recent findings in affective neuroscience and psychology indicate that human affect and emotional experience play a significant and useful role in human learning and decision-making. Most machine-learning and decision-making models, however, are based on old, purely cognitive models, and are slow, brittle, and awkward to adapt. We aim to redress many of these classic problems by developing new models that integrate affect with cognition. Ultimately, such improvements will allow machines to make smarter and more human-like decisions for better human-machine interaction.
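To make the idea of "integrating affect with cognition" concrete, here is a toy decision rule in which an affective valence signal biases an otherwise purely cognitive expected-value choice. This is my own illustrative sketch, not the model of Ahn and Picard; the function, weights, and values are all made up for illustration.

```python
# Toy illustration (not the MIT model): an affective signal modulates
# a purely cognitive expected-value decision.

def choose(options, affect_bias, w=0.3):
    """Pick the option with the highest affect-adjusted score.

    options:     {name: cognitive expected value}
    affect_bias: {name: affective valence in [-1, 1]} (missing -> neutral)
    w:           hypothetical weight controlling how strongly affect
                 modulates the cognitive estimate
    """
    def score(name):
        return options[name] + w * affect_bias.get(name, 0.0)
    return max(options, key=score)


# A purely cognitive chooser would pick "b" (higher expected value: 1.1 > 1.0),
# but a negative affective association with "b" flips the decision to "a".
options = {"a": 1.0, "b": 1.1}
affect = {"b": -0.8}
print(choose(options, affect))  # prints "a"
```

The interesting behaviour is the flip: with a neutral affect signal the rule reduces to the classical cognitive choice, while a strong valence can override a small difference in expected value, which is roughly the kind of affect-cognition interplay the quoted passage argues for.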
Let's see how these developments will enter the Emotion Markup Language.
-> related randform post mentioning emoticons