Hi, I'm ZHANG YING, a second-year master's student in the Okumura-Takamura Lab at Tokyo Institute of Technology.
My major is Information and Communications Engineering, and I am currently doing research on Machine Translation and Essay Scoring. If you are interested, you can download my resume to learn more about me :).
The encoder-decoder model consists of two neural networks: one encodes a sequence of symbols into a fixed-length vector representation, and the other decodes that representation into another sequence of symbols. The encoder and decoder are jointly trained to maximize the conditional probability of a target sequence given a source sequence.
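The idea above can be sketched in a few lines of numpy. This is a minimal, illustrative toy (not any particular lab's model): the encoder folds the source tokens into one fixed-length vector, and the decoder scores the target one token at a time, so that log P(target | source) decomposes into a sum over target positions. All sizes and parameters here are made up for the demo; a real model would learn them by gradient ascent on this log-probability.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HID = 6, 8  # toy vocabulary and hidden sizes (assumed for the demo)

# Randomly initialized parameters; training would adjust these to
# maximize the conditional log-probability of targets given sources.
E  = rng.normal(0, 0.1, (VOCAB, HID))   # token embeddings
We = rng.normal(0, 0.1, (HID, HID))     # encoder recurrence
Wd = rng.normal(0, 0.1, (HID, HID))     # decoder recurrence
Wo = rng.normal(0, 0.1, (HID, VOCAB))   # output projection

def encode(src):
    """Fold the source token sequence into a single fixed-length vector."""
    h = np.zeros(HID)
    for tok in src:
        h = np.tanh(E[tok] + We @ h)
    return h

def log_prob(src, tgt):
    """log P(tgt | src) = sum over t of log P(tgt_t | tgt_<t, src)."""
    h, lp = encode(src), 0.0
    for tok in tgt:
        logits = h @ Wo                      # score every vocabulary item
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                 # softmax over the vocabulary
        lp += np.log(probs[tok])
        h = np.tanh(E[tok] + Wd @ h)         # condition on the emitted token
    return lp

lp = log_prob([1, 2, 3], [4, 5, 0])  # a log-probability, so always negative
```

Note that the whole source sequence must pass through the single vector returned by `encode`, which is exactly the bottleneck that attention mechanisms were later introduced to relieve.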
The past years have witnessed rapid progress of this model on Natural Language Processing tasks such as Machine Translation, chatbots, Text Summarization, and Poetry Generation.
When using an encoder-decoder to generate text, a common problem is that the output often contains redundant, repeated tokens. My research tries to solve this problem.
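The repetition problem is easy to reproduce in a toy setting. The sketch below (an assumed illustration, not the method of this research) decodes greedily from a frozen score vector, so every step emits the same token; subtracting a simple count-based repetition penalty, one common baseline mitigation, breaks the loop.

```python
import numpy as np

rng = np.random.default_rng(1)
VOCAB, STEPS = 5, 6
logits = rng.normal(0, 1.0, VOCAB)  # frozen scores: plain greedy decoding loops

def decode(penalty=0.0):
    """Greedy decoding; scores drop for tokens already generated."""
    out, counts = [], np.zeros(VOCAB)
    for _ in range(STEPS):
        tok = int(np.argmax(logits - penalty * counts))
        out.append(tok)
        counts[tok] += 1
    return out

plain = decode(0.0)      # degenerate: the same token at every step
penalized = decode(5.0)  # the penalty discourages repeats
```

The penalty value 5.0 is chosen to dominate the toy score gaps; in practice the decoder's state changes at each step, so repetition is subtler than in this frozen-logits caricature, but the failure mode is the same.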
I finished high school in Jianou, China,
and then went to Chongqing for college.
During that period, I studied Information Management and Information Systems at Chongqing University, China.
After receiving my bachelor's degree, I came to Japan to pursue a master's degree in the field of Natural Language Processing.
4259 Nagatsutachō, Midori-ku,
+0 703 984 2217