
Gabriel Stanovsky

A Nurse is Blue and Elephant is Rugby: Cross Domain Alignment in Large Language Models Reveal Human-like Patterns

May 23, 2024

Do Zombies Understand? A Choose-Your-Own-Adventure Exploration of Machine Cognition

Mar 01, 2024

Leveraging Collection-Wide Similarities for Unsupervised Document Structure Extraction

Feb 21, 2024

K-QA: A Real-World Medical Q&A Benchmark

Jan 25, 2024

State of What Art? A Call for Multi-Prompt LLM Evaluation

Dec 31, 2023

Exploring the Impact of Training Data Distribution and Subword Tokenization on Gender Bias in Machine Translation

Sep 30, 2023

Instructed to Bias: Instruction-Tuned Language Models Exhibit Emergent Cognitive Bias

Aug 01, 2023

Are Layout-Infused Language Models Robust to Layout Distribution Shifts? A Case Study with Scientific Documents

Jun 01, 2023

Comparing Humans and Models on a Similar Scale: Towards Cognitive Gender Bias Evaluation in Coreference Resolution

May 24, 2023

Schema-Driven Information Extraction from Heterogeneous Tables

May 23, 2023