Together with colleagues from London and Tokyo, neuroscientist John-Dylan Haynes did an experiment (up to now apparently with only 21 test persons), in which a person had to choose whether to add or to subtract two numbers. Even before the test persons saw the numbers and before they started to compute, it was possible – using an MRI brain scan – to tell with a 70% chance what kind of decision the person was going to make. In other words: using the MRI the scientists could “read the mind” of the test persons (with a 70% chance). Freely chosen decisions usually happen in the prefrontal cortex.
Archive for the 'computer vision' Category
The face of a human (let's include the ears) is the part of the human body which is usually addressed first as an interface to the human mind and body behind it. And most often it stays the main interface to be used by other humans (and animals). After a first contact people may shake hands and so on, but still the face is usually the starting point for facing each other, and together with subtle gestures it can give way to very fast judgements about the personality of people.
So it is no wonder that a portrait of a person almost always includes the face. Faces usually move, and the movement is very important in the perception of a face. However, in a portrait painting or a portrait photograph there is no movement and – still – portraits describe the person behind the face, at least to a certain extent. It is also a well-known rumour (I couldn't find a study on it) that a drawing reflects the painter to a certain extent; for example, fat artists apparently tend to draw persons more solidly than thin artists do, and so on.
So it is no wonder that people try to find laws for, e.g., when a (still) face looks attractive to others and when not. Facial expressions (see above image) play a significant role (see also this old randform post). But cultural factors etc. are important as well. Still – if we assume to have eliminated all these factors as best as possible (e.g. by comparing bald black-and-white faces of the same age group looking emotionless) – is there then still a link between the appearance of a face and the interpretation of the human character behind the face? How stable is this interpretation, e.g. when the face was distorted by violence or an accident? How much does the physical distortion parallel the psychological one?
An analytical method is to start with proportions, where there are some prominent old works, like Leonardo's or Dürer's studies, leading not least to e.g. studies in artificial intelligence which, for example, link “beautiful” proportions to the low complexity of the correspondingly encoded information.
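The "low complexity" idea can be made concrete with a crude toy experiment: if one encodes an outline as a point sequence, the length of its compressed encoding can serve as a rough stand-in for the length of its shortest description, so a shape built from simple, repeated proportions should compress better than an arbitrary one. A minimal sketch (the encoding and the zlib proxy are my own illustrative choices, not from any of the cited studies):

```python
import random
import zlib

def complexity(points):
    """Crude complexity proxy: length of the zlib-compressed byte
    encoding of a point sequence (a stand-in for the length of the
    shortest description of the shape)."""
    data = ",".join(f"{x:.2f},{y:.2f}" for x, y in points).encode()
    return len(zlib.compress(data, level=9))

# An outline built from a simple, repeated grid of proportions ...
regular = [(i % 10, i // 10) for i in range(100)]

# ... versus one with arbitrary coordinates.
random.seed(0)
irregular = [(random.random() * 10, random.random() * 10) for _ in range(100)]

# The regular pattern admits a much shorter description.
print(complexity(regular) < complexity(irregular))  # → True
```

Of course this says nothing about faces by itself; it only illustrates the kind of complexity measure such studies appeal to.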
These questions are somewhat related to the question of how interfaces are related to processes of computing, also if one doesn't just think of robots. It also concerns questions of Human Computer Interaction, as we saw above, and finally Human Computer Human Interaction, which was addressed e.g. in our work seidesein.
At f.wish you can hang your personal and public wishes (e.g. for next year) onto a tree and read those of others (see above).
f.wish has a nice spongy letters-on-springs gravity simulation (making partial use of the traer.physics library for Processing). Sean Carroll of the physics blog cosmic variance was just discussing physics, and in particular gravity, in games – like this (partially physically incorrect) game; a Ninja game and the book “physics of the buffyverse” (which seems similar in intention to this book) were also topics in his post.
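The spongy behaviour of such letters-on-springs comes from integrating a spring force (Hooke's law) together with gravity and damping – essentially what particle-system libraries like traer.physics do inside Processing. A minimal one-particle sketch in Python (all constants here are made-up illustration values, not taken from f.wish):

```python
# One particle hanging from a spring under gravity, advanced with
# semi-implicit (symplectic) Euler: update velocity first, then
# position with the new velocity, which keeps the oscillation stable.
GRAVITY = 9.81   # pulls in +y (unit mass)
K = 40.0         # spring stiffness
DAMPING = 0.5    # velocity damping, gives the "spongy" settling
REST = 1.0       # spring rest length
ANCHOR = (0.0, 0.0)

def step(pos, vel, dt=0.01):
    dx, dy = pos[0] - ANCHOR[0], pos[1] - ANCHOR[1]
    dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
    stretch = dist - REST
    # Hooke's law towards the anchor, plus damping and gravity.
    fx = -K * stretch * dx / dist - DAMPING * vel[0]
    fy = -K * stretch * dy / dist - DAMPING * vel[1] + GRAVITY
    vel = (vel[0] + fx * dt, vel[1] + fy * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

pos, vel = (0.0, 2.0), (0.0, 0.0)   # start stretched below the anchor
for _ in range(1000):                # simulate 10 seconds
    pos, vel = step(pos, vel)
# The particle bobs and settles near y = REST + GRAVITY/K ≈ 1.245.
print(pos)
```

Letting each letter be such a particle, anchored by springs to its neighbours, already gives the wobbling-text effect.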
Another gravity game lets you change direction while flying, but it has no walls, so you may easily get lost in space. There is a dial which shows you how far away you are.
I finally managed to translate my article for the conference proceedings of the NMI2006 conference from German into English. There are a few additions which are not included in the German version.
The article is a description of our installation seidesein. It gives an account of our motivations for creating seidesein, but it also explains a bit of our motivation for other daytar works.
I am very grateful for any feedback on this article.
A new service from the Viennese company systemone:
Retrievr lets you find flickr images by drawing rough sketches of them. Finding images on Flickr is mostly textual (tags, keywords) or social (contacts, friends, groups). Retrievr is, like images, visual. At the same time it’s our testbed for image retrieval algorithms, so that when you add an image to a page in System One, it gets you the potentially most similar pictures back in realtime.
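Retrievr's actual retrieval reportedly builds on wavelet-based image signatures; a much cruder cousin of the same idea is to shrink every image to a tiny grayscale grid and rank the collection by pixel-wise distance to the query sketch. A minimal, purely illustrative sketch (the toy "images" below are just lists of pixel rows):

```python
# Rank a small image collection by similarity to a rough sketch,
# using tiny average-pooled grayscale thumbnails as signatures.

def downscale(img, size=4):
    """Average-pool a 2D grayscale image (list of rows) to size x size."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(size):
        row = []
        for j in range(size):
            ys = range(i * h // size, (i + 1) * h // size)
            xs = range(j * w // size, (j + 1) * w // size)
            cells = [img[y][x] for y in ys for x in xs]
            row.append(sum(cells) / len(cells))
        out.append(row)
    return out

def distance(a, b):
    """Sum of squared pixel differences between two thumbnails."""
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb))

def rank(sketch, collection):
    q = downscale(sketch)
    return sorted(collection, key=lambda item: distance(q, downscale(item[1])))

# Toy 8x8 "images": one with a dark left half, one with a dark top half.
left_dark = [[0] * 4 + [255] * 4 for _ in range(8)]
top_dark = [[0] * 8 for _ in range(4)] + [[255] * 8 for _ in range(4)]
sketch = [[10] * 4 + [240] * 4 for _ in range(8)]  # roughly like left_dark

results = rank(sketch, [("top", top_dark), ("left", left_dark)])
print(results[0][0])  # → "left"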
It was not fair to test the retrieval with my above flower image (the big one to the left), as it bears a lot of fine structure… and consequently I got the above results back (images to the right)… :)
See also this related old randform post.
Optical character recognition, usually abbreviated to OCR, is computer software designed to translate images of handwritten or typewritten text (usually captured by a scanner or a digitizer) into machine-processable text. OCR is e.g. used commercially in PDAs. However, “handwritten” characters need not be constrained to letters or simple symbols but could also be more complex shapes, if necessary also in 3D. The recognition of such shapes can also be interpreted as gesture recognition.
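One simple family of such shape/gesture recognizers works by template matching: resample each drawn stroke to a fixed number of points, normalize away position and scale, and pick the template whose points lie closest on average. A minimal sketch (the templates, names and parameters are illustrative, not from any particular OCR package):

```python
import math

def resample(points, n=16):
    """Resample a polyline to n roughly evenly spaced points along its length."""
    total = sum(math.dist(points[i], points[i + 1])
                for i in range(len(points) - 1)) or 1.0
    step = total / (n - 1)
    out = [points[0]]
    pts = list(points)
    d, i = 0.0, 0
    while len(out) < n and i < len(pts) - 1:
        seg = math.dist(pts[i], pts[i + 1])
        if seg > 0 and d + seg >= step:
            t = (step - d) / seg
            q = (pts[i][0] + t * (pts[i + 1][0] - pts[i][0]),
                 pts[i][1] + t * (pts[i + 1][1] - pts[i][1]))
            out.append(q)
            pts[i] = q  # continue walking from the inserted point
            d = 0.0
        else:
            d += seg
            i += 1
    while len(out) < n:          # pad with the endpoint if rounding fell short
        out.append(pts[-1])
    return out

def normalize(points):
    """Translate to the centroid and scale to the bounding-box size."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / scale, (y - cy) / scale) for x, y in points]

def classify(stroke, templates):
    s = normalize(resample(stroke))
    def score(tpl):
        t = normalize(resample(tpl))
        return sum(math.dist(a, b) for a, b in zip(s, t)) / len(s)
    return min(templates, key=lambda kv: score(kv[1]))[0]

templates = [("slash", [(0, 0), (10, 10)]),
             ("L", [(0, 0), (0, 10), (10, 10)])]
print(classify([(1, 1), (9, 9)], templates))  # → "slash"
```

The same machinery extends in principle to 3D strokes by adding a z-coordinate; real recognizers add rotation normalization and many more templates.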
A classic in MIDI-to-graphics “generative” animation is “pipedream” by Dave Crognale and Wayne Lytle. It is sold together with other works by them on a DVD compilation via their website animusic. The “pipedream” video itself is, however, also downloadable via the SIGGRAPH animation site on archive.org. And if you have an ATI graphics card you can render it in realtime via the ATI rendering-gadgets sites for Mac and Windows.
Wayne Lytle has also worked in scientific visualization, e.g. on this mathematical visualization video for string theorist Brian Greene.