The results were officially presented during the NICEPHORE DAYS 2006

Official web site: NICEPHORE DAYS 2006

ImagEVAL 2006 Workshop

Program and information

The ImagEVAL Workshop will take place after the CIVR Conference

  • Date: 12 July 2007
  • Place: Amsterdam University
  • CIVR 2007

We invite you to participate in the CIVR Conference.

Please check the notes for the workshop papers [pdf] and the copyright form [pdf]

Important date (workshop): paper submission deadline: 1 June 2007



In 2005, the Steering Committee of ImagEVAL had the opportunity to propose evaluation campaigns for funding by the French “Techno-Vision” program. The ImagEVAL project concerns the evaluation of technologies for image filtering, content-based image retrieval (CBIR) and automatic description of images in large-scale image databases.

The objective of ImagEVAL is twofold:

    • to organize large evaluations starting from concrete needs
    • to evaluate technologies developed by national and foreign research laboratories, as well as software solutions

    In order to pursue our idea of bringing image processing evaluation closer to user needs, we first consulted various users of image retrieval and analysis tools. We then decided, for clarity, to break down complex image retrieval tasks into simpler modules, each with a dedicated evaluation procedure [see the different tasks of ImagEVAL].

    We expect that, if the data providers (potential end-users) understand the interest of such campaigns, ImagEVAL will fulfil our hopes for a truly usage-oriented evaluation.

    Technological evaluation versus user-oriented evaluation

    Technological evaluation aims at establishing a ranking among automatic processing technologies, usually based on effectiveness measured by Precision/Recall. User-oriented evaluation, on the other hand, considers the systems as they are used by end-users. Such evaluations must take into account aspects like the quality of the user interface, the ease of use of interactive tools, response time, indexing time, and the time needed to adapt the system to a new domain.
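The Precision/Recall measures mentioned above can be sketched as follows. This is a minimal illustration, not ImagEVAL's actual scoring tool; the image identifiers are hypothetical.

```python
# Standard Precision/Recall effectiveness measures for one retrieval query.
# (Illustrative sketch; the sets of image ids below are hypothetical.)

def precision_recall(retrieved, relevant):
    """Compute precision and recall for a single query.

    retrieved: set of image ids returned by the system
    relevant:  set of image ids marked relevant in the ground truth
    """
    hits = retrieved & relevant  # correctly retrieved relevant images
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical query: the system returns 4 images, 3 of which are among
# the 6 relevant images in the ground truth.
p, r = precision_recall({1, 2, 3, 4}, {2, 3, 4, 5, 6, 7})
print(p, r)  # 0.75 0.5
```

Precision rewards returning only relevant images, while recall rewards returning all of them, which is why the two are usually reported together.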

    At the last CLEF workshop in 2005, Carol Peters asked why systems that obtained very good results in previous CLEF campaigns have not achieved commercial success. One response might be that the evaluation criteria “do not reflect the real use of the systems”.

    How can we improve our methods of technological evaluation so that end-user criteria can also be incorporated into the evaluation? This first edition of ImagEVAL aimed to provide some answers.

    Read more here

    Constitution of the corpora

    For the image data to be used in the five tasks, we tried to build a diversified corpus covering the usage of our commercial partners. Test and development images were taken from several important image providers. A complete study of the constitution of the ImagEVAL corpora and the ground truths can be found here [pdf]

    Read our comments about the databases and the copyright

    Constitution of the reference sets

    A partial or incomplete verification of participant answers could distort the measured performance in the evaluation. Since this first campaign involves databases of reasonable size, we decided to have the entire corpus manually pre-tagged by professional indexers. Two indexers tagged each photo for each task. In case of doubt, they discussed the case to reach a consensus. If no consensus could be established, the image was removed from the corpus.

    This joint principle:

    • (1) that the ground truth is built by end-users, and
    • (2) that we cannot ask a system to find a good answer if a human cannot decide,

    has been approved by all the participants.
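The consensus rule described above can be sketched in a few lines. This is a simplified illustration under the assumption that unresolved disagreements (after discussion) remain as differing tags; the image ids and tag names are hypothetical.

```python
# Sketch of the two-indexer consensus rule: an image enters the reference
# set only when both indexers assign it the same tag; images without
# consensus are removed from the corpus.
# (Illustrative only; ids and tags below are hypothetical.)

def build_reference_set(tags_a, tags_b):
    """tags_a, tags_b: dicts mapping image id -> tag from each indexer."""
    reference = {}
    removed = []
    for img in sorted(tags_a.keys() & tags_b.keys()):
        if tags_a[img] == tags_b[img]:
            reference[img] = tags_a[img]  # consensus reached: keep the image
        else:
            removed.append(img)           # no consensus: drop from the corpus
    return reference, removed

ref, dropped = build_reference_set(
    {"img1": "castle", "img2": "portrait", "img3": "landscape"},
    {"img1": "castle", "img2": "crowd",    "img3": "landscape"},
)
print(sorted(ref), dropped)  # ['img1', 'img3'] ['img2']
```

Dropping undecidable images keeps the reference set conservative: a system is never penalized on a case where the human indexers themselves could not agree.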

