Leonello Tarabella began his musical training as a jazz alto-sax player.
He earned his degree in Computer Science at the University of Pisa and
started his work on computer music under the direction of M° Pietro
Grossi.
As a researcher he coordinates the activities of the cART Lab (computer art laboratory) of CNUCE/C.N.R., Pisa, and teaches a yearly computer music course at the Computer Science Dept. of Pisa University. His research mainly concerns the design of languages for algorithmic composition and of original gestural man/machine interfaces for interactive computer music/graphics performances. As a musician he composes and performs his own computer music with the systems he has realized, and has performed in Madrid, The Netherlands, Shanghai, Thessaloniki, New York, La Habana and on the Italian national TV networks.
The CNUCE Institute was founded in 1965 as the Computing Center of the
University of Pisa; it was later attached to the National Research Council
(C.N.R.) of Italy and currently has a staff of about 80, active in
laboratories devoted to logic programming, computer networks, satellite flight
control, databases, parallel computing, structural engineering, remote
sensing, computer graphics and computer music. At the beginning of 2000
the CNUCE Institute moved to the new "Area della Ricerca del
C.N.R., Pisa".
The activities of the cART Lab mainly consist of applied research,
education and production. The keywords characterising the activity of the
Lab are man-machine interaction and real-time gesture control.
Infrared beams and the processing of video-captured images have been
investigated in order to design and build original gesture interfaces
based on the remote sensing (i.e., without mechanical and/or electrical
links) of objects moved by performers or of direct gestures of the human
body. Many interactive multimedia works have been produced and presented
at important national and international events.
Gesture, that is, movement of the hands and head and the posture of the
mouth and eyes, plays a very important role in human communication: it
can be seen as a parallel language that enriches the semantic content of
speech, or as an alternative way to communicate basic concepts and information
between people of different cultures and mother tongues.
The milestones of tools and paradigms in man-computer communication
have been punched cards and character printers, keyboards and character
screens, and finally the mouse and graphic screens. Thanks to the steadily
increasing power of computers and of electronic systems able to sense the
presence, distance, position, temperature (and so on) of objects, a new
field of investigation and implementation has opened up in the last few
years: the recognition of human gesture.
In this direction, one of the most promising areas of application
is computer vision, for two main reasons: first, vision is the natural
way people recognise the gestures of other humans; second, the required
hardware is simple, inexpensive and standard enough, i.e. ordinary
video cameras and digitizer cards. Whatever the technology used, the whole
problem consists of distinct steps and fields of research. Just as the
main steps in speech recognition lead from the acoustic signal to words and
from sequences of words to semantics, in gesture recognition the steps
mainly are: recognition of figures (hands, face) in terms of shape and
position in space; dynamics, i.e. the changing of shapes in time, trajectories,
and the detection of starting and ending points of trajectories; and the
semantics of gesture.
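The "dynamics" step, and in particular the detection of the starting and ending points of trajectories, can be illustrated with a small sketch: a stream of tracked positions is segmented into strokes wherever the frame-to-frame speed crosses a threshold. The function name and the threshold value are illustrative assumptions, not part of any system described here.

```python
def segment_strokes(positions, speed_threshold=2.0):
    """Split a list of (x, y) positions into strokes.

    A stroke is a run of frames where the point moves faster than
    speed_threshold between consecutive frames; returns a list of
    (start_index, end_index) pairs marking each stroke.
    """
    strokes = []
    start = None
    for i in range(1, len(positions)):
        (x0, y0), (x1, y1) = positions[i - 1], positions[i]
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        if speed > speed_threshold:
            if start is None:
                start = i - 1              # trajectory starting point
        elif start is not None:
            strokes.append((start, i - 1))  # trajectory ending point
            start = None
    if start is not None:                   # stroke still open at end
        strokes.append((start, len(positions) - 1))
    return strokes

# A hand resting, sweeping to the right, then resting again:
path = [(0, 0), (0, 0), (5, 0), (10, 0), (15, 0), (15, 0), (15, 0)]
print(segment_strokes(path))  # prints [(1, 4)]
```

A real system would of course smooth the position stream and use hysteresis on the threshold; this sketch only shows the start/end detection idea.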
In contemporary electronic music, great attention is paid to the
synchronization between musicians and computers, and to the possibility of
shaping the synthesized music by gesture in order to regain human feeling
and sensitivity during live performance. Besides MIDI controllers
such as keyboards, drum pads and pitch-to-MIDI converters, which issue messages
for controlling digital sound machines, and devices such as the DataGlove
used in Virtual Reality applications, many researchers active in the field
of computer music have realized special devices and systems able to extract
as much information as possible from the movements of the human body.
At the cART Lab of CNUCE, Leonello Tarabella has focused his attention
on designing and developing original new man-machine interfaces based on
sensors and technologies typically used in robotics: infrared beams and
the real-time analysis of video-captured images. The basic idea consists of
the remote sensing (i.e., without mechanical and/or electrical links) of
objects moved by performers and/or of gestures of the human
body. Some original gesture-recognition devices and systems are described here.
The TwinTowers device, based on
infrared technology, detects the height and angular positions of the hands.
After some previous experience, a system for recognizing the shape,
position and rotation of the hands has been developed. The performer moves
his/her hands in a video-camera capture area; the camera sends the signal
to a video digitizer card plugged into a computer, and the computer processes
the mapped figures of the performer's hands, producing data on the
x-y position, shape (posture) and angle of rotation of both hands.
The data extracted from the image analysis at every frame are used for
controlling real-time interactive computer music and computer graphics performances.
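As a rough illustration of this kind of per-frame analysis, the x-y position and rotation angle of a binarized hand silhouette can be estimated from image moments: the centroid from the first-order moments, and the orientation of the principal axis from the second-order central moments. This is a generic computer-vision sketch under stated assumptions, not the actual cART Lab implementation.

```python
import math

def centroid_and_angle(mask):
    """mask: 2D list of 0/1 pixels (a binarized silhouette).

    Returns (cx, cy, angle): the blob centroid and the orientation in
    radians of its principal axis, computed from image moments.
    """
    # First-order moments give the centroid (x-y position).
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                m00 += 1
                m10 += x
                m01 += y
    cx, cy = m10 / m00, m01 / m00
    # Second-order central moments give the rotation angle.
    mu20 = mu02 = mu11 = 0.0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                mu20 += (x - cx) ** 2
                mu02 += (y - cy) ** 2
                mu11 += (x - cx) * (y - cy)
    angle = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    return cx, cy, angle

# A 1-pixel-thick horizontal bar: centroid at its middle, angle ~ 0.
bar = [[0] * 6, [0, 1, 1, 1, 1, 0], [0] * 6]
cx, cy, ang = centroid_and_angle(bar)
print(round(cx, 2), round(cy, 2), round(ang, 3))  # prints 2.5 1.0 0.0
```

Running such a routine once per captured frame yields the continuous position and rotation streams that can then drive sound and graphics parameters.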
With this system, a number of applications have been implemented. The
most important are reported here:
The Imaginary Piano, where the
hands of a pianist play in the air with no real keyboard;
the PAGe (Painting by Aerial Gesture)
system, where the hands of a performer paint in the air images projected
onto a large video screen.
In order to put to work the power of algorithmic composition
and of the gestural-control approach in real-time interactive performances,
a special language called Real-Time Concurrent PascalMusic (RTCPM) [1][2]
has been developed over the last few years. As an evolution of RTCPM,
a new language named GALileo, which includes
visual programming, algorithmic composition and signal-processing facilities,
has been designed and is now under development.