The main objective of the Action is to develop an advanced acoustical, perceptual and psychological analysis of verbal and non-verbal communication signals originating in spontaneous face-to-face interaction, in order to identify algorithms and automatic procedures capable of recognising human emotional states. Key aspects include the integration of the developed algorithms and procedures into telecommunication applications, and the recognition of emotional states, gestures, speech and facial expressions, in anticipation of intelligent avatars and interactive dialogue systems that could improve user access to future telecommunication services.
The purpose of the project is to carry out collaborative research on the development and analysis of multimodal spoken language corpora in the Nordic countries.
This includes conversational data collected in collaboration with Nick Campbell during my visit to ATR/NICT in Kyoto, and eye-tracking data collected with the help of Prof. Seiichi Yamamoto during my stay as a NICT Visiting Scholar at Doshisha University in Kyoto. Related links and papers are also available. (Password protected)
DUMAS is a collaborative project funded under the European Union's 5th Framework Programme, and I am the technical coordinator of the project. The goal of the project is to furnish electronic systems with intelligent spoken interaction capabilities, and to investigate adaptive multilingual interaction techniques for handling both spoken and text input. The project has constructed AthosMail, an e-mail application that deals with multilingual issues in several forms and environments, and whose functionality can be adapted to different users, situations and tasks.
MUMIN is a Nordic network that aims to stimulate Nordic research on multimodal
interfaces. It encourages joint activities on multimodal research and
application development, and organises PhD courses and workshops on
issues related to multimodal interaction. The first MUMIN workshop was
organised in Helsinki, in conjunction with a PhD course in Tampere, in
November 2002. The 1st Nordic Symposium on Multimodal Interfaces was
held in Copenhagen in September 2003, and a workshop on multimodal
annotation in Stockholm in June 2004.
This is a continuation of the Interact project (see below). The project investigates the use of ontologies to enrich the system's interaction and reasoning capabilities.
This too is a
continuation of the Interact project, focusing on speech
applications. The aim is to combine and coordinate speech, text and pointing
gestures in a multimodal interaction framework to provide navigation help and
travel information for users in the Helsinki area.
I was the project manager of the Interact project (2001-2003), a collaboration between four Finnish universities, supported by the Finnish Technology Agency TEKES, by leading IT companies, and by the Arla Institute and the Finnish Association for the Deaf. The project aimed at developing models for human-computer interaction and at building a spoken dialogue system that would allow users to interact with the computer in a natural and robust way. Within the project, we developed the first Finnish dialogue system providing information on Helsinki area bus timetables.
MUMMI is a study project at UIAH, carried out in collaboration with Marjo Mäenpää and Antti Raike. We explore the Design for All concept in designing multimodal museum and other cultural interfaces, focusing especially on how to provide artistic content in an effective way. We collaborate with the Finnish National Gallery (Marjatta Levanto, Riikka Haapalainen). See also Antti's award-winning web portal Elokuvantaju (CinemaSense), which provides net-based study material on film production and explores how the concepts of film making can be made accessible to sign language users.