Ying Zhang's homepage

About Ying Zhang


I am currently a postdoctoral researcher at the Institute of Innovative Research, Tokyo Institute of Technology, under the supervision of Prof. Manabu Okumura in the Okumura-Funakoshi Lab.

Thank you for your interest in my profile! I work on natural language processing. To keep track of my newest publications, you can also visit my ORCID, Semantic Scholar, or Google Scholar.

Newest Publications

  1. Ying Zhang*, Hidetaka Kamigaito, and Manabu Okumura. “Bidirectional Transformer Reranker for Grammatical Error Correction,” Journal of Natural Language Processing (JNLP 2024), Volume 31, Issue 1. [paper]

  2. Ying Zhang*, Hidetaka Kamigaito, and Manabu Okumura. “Bidirectional Transformer Reranker for Grammatical Error Correction,” In Findings of the Association for Computational Linguistics: ACL 2023, pp. 3801-3825, long paper. [paper, bib, presentation, poster, code]

  3. Ying Zhang*, Hidetaka Kamigaito, Tatsuya Aoki, Hiroya Takamura, and Manabu Okumura. “Generic Mechanism for Reducing Repetitions in Encoder-Decoder Models,” Journal of Natural Language Processing (JNLP 2023), Volume 30, Issue 2, pp. 401-431. [paper, bib, code]

  4. Ying Zhang*, Hidetaka Kamigaito, and Manabu Okumura. “A Language Model-based Generative Classifier for Sentence-level Discourse Parsing,” In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), pp. 2432-2446, long paper. [paper, bib, presentation, poster, code]

  5. Ying Zhang*, Hidetaka Kamigaito, Tatsuya Aoki, Hiroya Takamura, and Manabu Okumura. “Generic Mechanism for Reducing Repetitions in Encoder-Decoder Models,” In Proceedings of the International Conference Recent Advances in Natural Language Processing (RANLP 2021), pp. 1606-1615. [paper, bib, presentation, code]

Research Interests


My research focuses on natural language processing, which uses computers to understand text as humans do. Specifically, I am passionate about developing general functions and mechanisms that can easily be applied to pretrained large language models across various natural language generation tasks to improve generation quality. My Ph.D. dissertation is titled “Addressing Text Degeneration of Discriminative Models with Re-ranking Methods.”

For instance, I am particularly interested in addressing the issue of text degeneration: when likelihood is used as the decoding objective, models often prefer bland and repetitive text over natural and grammatical text (Holtzman et al., 2020). My work therefore trains deep neural networks to assign higher probabilities to natural and grammatical texts than to bland and repetitive ones, for example by re-ranking candidate outputs.
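Below is a minimal sketch of the general n-best re-ranking idea, assuming a generic encoder-decoder generator (t5-small as a stand-in) and an external GPT-2 language model as the re-ranker; the model names and the rerank function are illustrative assumptions, not the exact setup from my papers.

    # A sketch of n-best re-ranking: generate candidates with beam search,
    # then keep the one an external language model finds most fluent.
    import torch
    from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                              GPT2LMHeadModel, GPT2TokenizerFast)

    device = "cuda" if torch.cuda.is_available() else "cpu"

    gen_tok = AutoTokenizer.from_pretrained("t5-small")           # stand-in generator
    generator = AutoModelForSeq2SeqLM.from_pretrained("t5-small").to(device)
    lm_tok = GPT2TokenizerFast.from_pretrained("gpt2")            # external re-ranker
    reranker = GPT2LMHeadModel.from_pretrained("gpt2").to(device)

    def rerank(source: str, num_candidates: int = 5) -> str:
        inputs = gen_tok(source, return_tensors="pt").to(device)
        outputs = generator.generate(**inputs,
                                     num_beams=num_candidates,
                                     num_return_sequences=num_candidates,
                                     max_new_tokens=64)
        candidates = gen_tok.batch_decode(outputs, skip_special_tokens=True)

        best, best_score = None, float("-inf")
        for cand in candidates:
            ids = lm_tok(cand, return_tensors="pt").input_ids.to(device)
            with torch.no_grad():
                # labels=ids yields the mean token-level cross-entropy;
                # its negation is an average log-likelihood per token.
                score = -reranker(ids, labels=ids).loss.item()
            if score > best_score:
                best, best_score = cand, score
        return best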

I am currently researching the representation degeneration problem, aiming to enhance the expressiveness of token embeddings by reducing their overall similarity during training (Gao et al., 2019).
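A minimal sketch of one way to encourage this, assuming a PyTorch embedding matrix and a hypothetical hyperparameter lambda_reg: penalize the average pairwise cosine similarity of (a sample of) the token embeddings and add it to the task loss. This illustrates the general idea only, not the exact formulation of Gao et al. (2019) or of my own work.

    import torch

    def cosine_regularizer(embedding: torch.Tensor, sample_size: int = 1024) -> torch.Tensor:
        # embedding: (vocab_size, hidden_dim) weight matrix; rows are sampled
        # because the full vocab-by-vocab similarity matrix is expensive.
        idx = torch.randperm(embedding.size(0), device=embedding.device)[:sample_size]
        normed = torch.nn.functional.normalize(embedding[idx], dim=-1)
        sim = normed @ normed.T                                   # pairwise cosines
        off_diag = sim - torch.eye(sim.size(0), device=sim.device)
        # Mean off-diagonal similarity; lower values mean more spread-out embeddings.
        return off_diag.sum() / (sim.numel() - sim.size(0))

    # Inside a training step (cross_entropy is the usual task loss):
    # loss = cross_entropy + lambda_reg * cosine_regularizer(model.get_input_embeddings().weight)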

Academic/Career


Thank you for your interest in my work!

If you have any questions about our papers or code, please feel free to contact me via

If you would like to know about admissions to the Okumura-Funakoshi Lab, please check our lab homepage directly, contact the two professors, or attend one of our lab open days.

I will not reply to any emails inquiring about the lab.