Friday 9 December 2011

The ability of objects to interpret unusual data: AUDIOVISUAL INTERPRETATION

There is a huge range of information that could be collected from normal human interaction that isn't easily processed by computers, e.g. movement and speech.

HCI has taken some massive steps forward with the incorporation of new ways of communicating, e.g. movement tracking and 3D scanning and projection. This data can be collected and interpreted.


Gesture recognition: how it works.

A gesture recognition system is typically made up of several components:

•  Gesture Modeling
•  Gesture Analysis 
•  Gesture Recognition
•  Gesture-Based Systems and Applications 

Gestures can be modelled in 3D space and reinterpreted by the system.
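As a rough sketch of how the modelling, analysis and recognition stages above fit together, here is a toy classifier that treats a gesture as a sequence of (x, y) points and recognises it as a swipe or a tap. The function names and thresholds are purely illustrative, not from any real gesture toolkit.

```python
# Toy gesture pipeline: model a gesture as a sequence of (x, y)
# points, analyse its net displacement, then recognise it by name.
# All names and thresholds are illustrative assumptions.

def analyse(points):
    """Analysis stage: reduce the raw point sequence to features."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    return x1 - x0, y1 - y0  # net horizontal / vertical movement

def recognise(points, threshold=10.0):
    """Recognition stage: map the features to a named gesture."""
    dx, dy = analyse(points)
    if abs(dx) < threshold and abs(dy) < threshold:
        return "tap"
    if abs(dx) >= abs(dy):
        return "swipe-right" if dx > 0 else "swipe-left"
    return "swipe-down" if dy > 0 else "swipe-up"

# Example: a stroke moving steadily to the right reads as a swipe.
stroke = [(0, 0), (5, 1), (12, 0), (25, 2)]
print(recognise(stroke))  # swipe-right
```

A real system would replace the displacement features with a learned model and take 3D input, but the stage boundaries stay the same.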

Speech is also something that computers have been used to analyse. Audio recorders and interpreters were utilised in the following study to estimate how many words men and women say, on average, per day.


Women are generally assumed to be more talkative than men. Data were analyzed from 396 participants who wore a voice recorder that sampled ambient sounds for several days. Participants' daily word use was extrapolated from the number of recorded words. Women and men both spoke about 16,000 words per day.

Findings
Sex differences in conversational behavior have long been a topic of public and scientific interest (12). The stereotype of female talkativeness is deeply engrained in Western folklore and often considered a scientific fact. In the first printing of her book, neuropsychiatrist Brizendine reported, “A woman uses about 20,000 words per day while a man uses about 7,000” (3). These numbers have since circulated throughout television, radio, and print media (e.g., CBS, CNN, National Public Radio, Newsweek, the New York Times, and the Washington Post). Indeed, the 20,000-versus-7000 word estimates appear to have achieved the status of a cultural myth in that comparable differences have been cited in the media for the past 15 years (4).

AUDIOVISUAL INTERPRETATION

Sound and visuals can be generated from data using various algorithms. What has not been done yet is having different objects respond to each other and change the data interpretations.
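One of the simplest such algorithms is a pitch mapping: each data reading is assigned a note, with low values becoming low pitches. A minimal sketch, assuming the data is just a list of numbers; the choice of a pentatonic scale is an arbitrary assumption that keeps any mapping sounding musical:

```python
# Minimal data sonification sketch: map a list of numeric readings
# onto frequencies in a pentatonic scale. A generic mapping for
# illustration, not a description of any particular system.

PENTATONIC = [261.63, 293.66, 329.63, 392.00, 440.00]  # C D E G A (Hz)

def sonify(data):
    """Map each reading to a note frequency: low values -> low pitches."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0  # avoid dividing by zero on flat data
    notes = []
    for value in data:
        index = int((value - lo) / span * (len(PENTATONIC) - 1))
        notes.append(PENTATONIC[index])
    return notes

readings = [2.0, 5.0, 3.5, 8.0, 2.0]
print(sonify(readings))
```

The same scaling idea maps data to visual parameters (colour, position) instead of pitch, which is one way the sound and the visuals could stay linked to the same underlying data.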


Aboriginal Australians interpret information about their environment and share it through song: how can collected data be transferred into sound and shared?

What data would be used? How would it be interpreted? A new way of making music. Can it be made into visual data?
