Yejin Bang

The Pyramid of Captions

May 01, 2024

High-Dimension Human Value Representation in Large Language Models

Apr 11, 2024

Measuring Political Bias in Large Language Models: What Is Said and How It Is Said

Mar 27, 2024

Mitigating Framing Bias with Polarity Minimization Loss

Nov 03, 2023

Survey of Social Bias in Vision-Language Models

Sep 24, 2023

Learn What NOT to Learn: Towards Generative Safety in Chatbots

Apr 25, 2023

A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity

Feb 28, 2023

Casual Conversations v2: Designing a large consent-driven dataset to measure algorithmic bias and robustness

Nov 10, 2022

Enabling Classifiers to Make Judgements Explicitly Aligned with Human Values

Oct 14, 2022

AiSocrates: Towards Answering Ethical Quandary Questions

May 24, 2022