Olatunji Ruwase

Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone

Apr 23, 2024

Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding

Mar 05, 2024

FP6-LLM: Efficiently Serving Large Language Models Through FP6-Centric Algorithm-System Co-Design

Jan 25, 2024

ZeroQuant(4+2): Redefining LLMs Quantization with a New FP6-Centric Strategy for Diverse Generative Tasks

Dec 18, 2023

DeepSpeed-VisualChat: Multi-Round Multi-Image Interleave Chat via Multi-Modal Causal Attention

Sep 29, 2023

DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales

Aug 02, 2023

ZeRO++: Extremely Efficient Collective Communication for Giant Model Training

Jun 16, 2023

A Novel Tensor-Expert Hybrid Parallelism Approach to Scale Mixture-of-Experts Training

Mar 11, 2023

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

Nov 09, 2022

DeepSpeed Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale

Jun 30, 2022