Over the last few years, the organisers of the SCI World MultiConference on Systemics, Cybernetics, and Informatics have often circulated invitations to participate. But what exactly is SCI?
This highly successful event -- a remarkable 1,859 papers were accepted in 2001 -- is run by the International Institute of Informatics and Systemics (IIIS). This is "a non-profitable international Organization which takes into consideration the globalization process". It appears to have no officers or location, and is not associated with any academic institution. Its only public role appears to be to host the SCI conference.
The conference organisers widely circulate invitations to join the program committee. This year, a PhD student of mine was personally invited to join, a surprising honour for an individual with a single minor publication. Several newly qualified junior staff received the same invitation. When I visited the web site, a substantial program committee was already in place. Some of its members may be important researchers in their fields; however, of the several I investigated further, only some appeared to hold academic posts, and none had a significant number of publications. Yet the web site is a convincing document that appears to reflect a substantial enterprise.
The call-for-papers was circulated even more widely. It explicitly states that papers will be reviewed. As an experiment, I arranged for three papers, by different authors, to be submitted to the 2002 conference. All three were superficially reasonable submissions, on the first page at least, and all three were accepted without comment. Yet the merits of these papers as research publications are far from clear:
Two existing unpublished papers of mine, interleaved sentence by sentence, so that the text alternates between two topics. The paper makes no sense, but is just possibly no worse than some legitimate submissions I have refereed. It is conceivable that a reviewer might regard it as acceptable, albeit impenetrable.
An unpublished paper of mine, with additional remarks inserted in the introduction and elsewhere. These gems of research observation included:
... we have implemented a[n] ... algorithm ... the computational cost is high, and the method does not work at all. We believe that this method is not capable of being improved ...
The final fifty [rules] were culled from a much larger number originally proposed, when in the light of day we discovered that many of the suggestions were due to inebriation. An important discovery we made at this stage is that clear thinking is necessary in this research. Intoxication sustained over a long period would have ended the work prematurely.
For example, consider the parsing of "bathtub" ... Interestingly, this discovery was made in the bathtub by the second author and his girlfriend whose activities and involuntary vocalisations at that moment led to some considerable difficulty with a range of pronunciation tasks.
Many of the problems [in our code] may be due to poor implementation, as we have found many bugs in our code. As this research is not well designed and has led to such poor outcomes, we have decided not to improve the code. An alternative was to invent more promising results, an approach that we report on in the last line of the table. As this data shows, concoction of successful experiments is a highly effective method of improving apparent performance.

Which program committee members thought this work worthy of acceptance?
A surreal collection of remarks about information retrieval, by myself and a colleague. Aside from the first page, many of the paragraphs make no sense, and much of the content consists of jokes and non sequiturs. For example,
The growth of information retrieval corresponded with the popularity of Sartre and existentialism, so that answers simply were; their meaning and content was not relevant. Later, the influence of popular mysticism and Zen philosophies led to a reverse approach, in which answers were not. Anarchists insisted that answers be statements that undermine the question. Another approach has been to attack the implicit dominance of the query and ask whether the query is relevant to the answers, thus seeking equality in the query-answer relationship.
References included "The voyage of the Beagle" (Darwin), "Equations" (Maxwell), and "Users and dualism" (D. Carte). It is difficult to believe that this paper was read, let alone reviewed.
This paper had been circulating for a while as a spoof of information retrieval, and had previously been submitted to several venues -- and rejected every time. These venues included an international workshop, technical reports, a SIG newsletter, and an Australian conference.
The organisers of the conference invite contributors to pay the registration fee, with a separate fee for each accepted paper, and state that attendance is not necessary so long as the publication fee is paid -- a highly unusual practice. I have repeatedly requested the referees' reports, but have received no response. The organisers have, however, responded rapidly to queries about the financial arrangements. Although this conference may well be legitimate, the reviewing process appears to be nonexistent and the statements in the call-for-papers misleading.
Exactly the same cast of characters is listed in the web pages for 2003, under exactly the same questionable arrangements (although the pages have moved).
Update. The University of Texas at Austin is hosting a related 2004 event, CCCT. Some of the individuals have changed; the structure has not. Make your own judgements before becoming involved.