Zheng Zhao

PhD Student

I am a fourth-year PhD student with the UKRI Centre for Doctoral Training in NLP at the University of Edinburgh, working with Shay Cohen and Bonnie Webber. I am affiliated with ILCC in the School of Informatics, and I am a member of the Cohort and the EdinburghNLP group.

My research focuses on analyzing and interpreting neural networks for NLP. I am also interested in large language models, summarization, discourse, and related topics.


Education
  • University of Edinburgh
    PhD in Natural Language Processing
    Sep. 2020 - Present
  • University of Edinburgh
Master's by Research in Natural Language Processing
    Sep. 2019 - Aug. 2020
  • University of Edinburgh
    BEng in Artificial Intelligence and Software Engineering
    Sep. 2015 - Jul. 2019
Experience
  • Amazon Alexa AI
    Applied Scientist Intern
    Jun. 2023 - Nov. 2023
  • Goldman Sachs
    Summer Analyst
    Jun. 2018 - Aug. 2018
News
2024
  • Jun. I am attending NAACL 2024 in Mexico City to present our papers on temporal grounding and context understanding in LLMs.
2023
  • Dec. I am presenting two papers on multilingual summarization and multilingual representation analysis at EMNLP 2023 in Singapore.
  • Jun. I am starting my internship at Amazon Alexa AI in Cambridge, UK.
2022
  • Dec. I am attending EMNLP 2022 in Abu Dhabi to present our work on understanding domain learning in language models.
Selected Publications
Spectral Editing of Activations for Large Language Model Alignment

Yifu Qiu, Zheng Zhao, Yftah Ziser, Anna Korhonen, Edoardo Ponti, Shay Cohen

arXiv 2024

We introduce Spectral Editing of Activations (SEA), a novel inference-time method to adjust large language models' internal representations, improving truthfulness and reducing bias. SEA projects input representations to align with positive examples while minimizing alignment with negatives, showing superior effectiveness, generalization, and efficiency compared to existing methods with minimal impact on other model capabilities.

Enhancing Contextual Understanding in Large Language Models through Contrastive Decoding

Zheng Zhao, Emilio Monti, Jens Lehmann, Haytham Assem

NAACL 2024 Oral

This work introduces a novel approach that integrates contrastive decoding with adversarial irrelevant passages as negative samples to enhance robust context grounding during generation; the method operates at inference time without requiring further training.

A Joint Matrix Factorization Analysis of Multilingual Representations

Zheng Zhao, Yftah Ziser, Bonnie Webber, Shay Cohen

EMNLP (Findings) 2023

This work presents an analysis tool based on joint matrix factorization for comparing the latent representations of multilingual and monolingual models, and finds that the factorization outputs exhibit strong associations with performance observed across different cross-lingual tasks.

Understanding Domain Learning in Language Models Through Subpopulation Analysis

Zheng Zhao, Yftah Ziser, Shay Cohen

BlackboxNLP 2022

We examine how different domains are represented in neural network architectures, focusing on the relationship between domains, model size, and training data. Using subpopulation analysis with SVCCA on Transformer-based language models, we compare models trained on multiple domains versus a single domain. Our findings show that increasing model capacity differently affects domain information storage in upper and lower layers, with larger models embedding domain-specific information similarly to separate smaller models.

Reducing Quantity Hallucinations in Abstractive Summarization

Zheng Zhao, Shay Cohen, Bonnie Webber

EMNLP (Findings) 2020

Abstractive summaries often hallucinate unsupported content, but our system, Herman, mitigates this by verifying specific entities like dates and numbers, improving summary accuracy and earning higher ROUGE scores and human preference.
