Text summarization has been a research topic since the 1950s; however, the field became considerably more active in the second half of the 1990s, and even more so around 2000. The aim of text summarization research is to obtain good summaries, but it has long been considered difficult to evaluate the summaries produced by text summarization systems, and we still have no definite standard measures for evaluating such systems.
As research on automatic text summarization grows, so does the need to discuss and clarify how text summarization systems should be evaluated. Looking at evaluation efforts overseas, SUMMAC was conducted in the United States in May 1998 as part of the DARPA Tipster (Phase III) Text Program. Today the TIDES program is under way, and it appears that the evaluation of text summarization will continue there.
In Japan, there has also been a great deal of research on automatic text summarization. However, because each university and industrial research organization has evaluated its systems individually with its own evaluation measures, and because there has been little discussion of evaluation measures and methods, it is difficult to compare text summarization systems. In addition, we do not have sufficient language resources, such as human-prepared summaries.
We have therefore chosen Automatic Text Summarization as an NTCIR-2 task, so that researchers in the field can collect and share text data for summarization, and so that the issues surrounding evaluation measures and methods for the summarization of Japanese texts can be clarified.
Co-chairs of the Text Summarization Task (in alphabetical order):
Contact Information
Takahiro Fukusima (Otemon Gakuin University) fukusima@res.otemon.ac.jp
Tsutomu Hirao (NTT) hirao@cslab.kecl.ntt.co.jp
Hidetsugu Nanba (Hiroshima City University) nanba@its.hiroshima-cu.ac.jp
Manabu Okumura (Tokyo Institute of Technology) oku@pi.titech.ac.jp
Please send comments and suggestions to:
tsc-adm@lr.pi.titech.ac.jp