# Calc llm gpu memory and upgrade to docusaurus v3 (#80)

Showing 5 changed files with 3,528 additions and 1,514 deletions.

---
slug: calculating-gpu-memory-for-llm
title: "Calculating GPU memory for LLMs"
authors:
  - name: Sam Stoelinga
    title: Engineer
    url: https://github.com/samos123
tags: [llm, gpu, memory]
---

How many GPUs do I need to serve Llama 70B? To answer that, you need to know
how much GPU memory the Large Language Model will require.

The formula is simple:

$$
M = \dfrac{P \times 4B}{32 / Q} + O
$$

| Symbol | Description |
| ------ | ----------- |
| M | GPU memory, expressed in gigabytes (GB) |
| P | The number of parameters in the model, e.g. a 7B model has 7 billion parameters |
| 4B | 4 bytes, the memory used by each parameter at full 32-bit precision |
| 32 | There are 32 bits in 4 bytes |
| Q | The number of bits the model should be loaded in, e.g. 16, 8, or 4 bits |
| O | Overhead of loading additional things into GPU memory, e.g. inputs or batches |
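
The formula translates directly into code. Here is a minimal Python sketch of the same calculation; the function and argument names are just for illustration:

```python
def llm_gpu_memory_gb(params_billion: float, bits: int, overhead_gb: float) -> float:
    """Estimate the GPU memory (in GB) needed to load an LLM.

    params_billion -- number of parameters in billions (P)
    bits           -- precision used to load the model (Q), e.g. 16, 8, or 4
    overhead_gb    -- extra GPU memory for inputs, batches, etc. (O)
    """
    bytes_per_param = 4  # 4 bytes per parameter at full 32-bit precision
    return (params_billion * bytes_per_param) / (32 / bits) + overhead_gb
```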

Now let's try out some examples.

### GPU memory required for serving Llama 70B

Let's try it out for Llama 70B, which we will load in 16-bit precision with 10GB of
overhead. The model has 70 billion parameters.

$$
\dfrac{70 \times 4 \, \mathrm{bytes}}{32 / 16} + 10 \, \mathrm{GB} = 150 \, \mathrm{GB}
$$

That's quite a lot of memory. A single A100 80GB wouldn't be enough, but
2x A100 80GB should be enough to serve the Llama 2 70B model in 16-bit precision.
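
The sketch above gives the same number:

```python
print(llm_gpu_memory_gb(70, bits=16, overhead_gb=10))  # 150.0
```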

How can we further reduce the GPU memory required for Llama 2 70B? Quantization is a
method to reduce the memory footprint. It does this by reducing the precision of the
model's parameters from floating point to lower-bit representations, such as 8-bit
integers. This significantly decreases the memory and computational requirements,
enabling more efficient deployment of the model, particularly on devices with limited
resources. However, it requires careful management to maintain the model's
performance, as reducing precision can impact the accuracy of the outputs.

In general, the consensus seems to be that 8-bit quantization achieves performance
similar to 16-bit, whereas 4-bit quantization can have a noticeable impact on model
performance.

Let's do another example, using 4-bit quantization of Llama 2 70B and 1GB of overhead:

$$
\dfrac{70 \times 4 \, \mathrm{bytes}}{32 / 4} + 1 \, \mathrm{GB} = 36 \, \mathrm{GB}
$$

This is something you could easily run on 2x L4 24GB GPUs.
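
Again, the sketch agrees:

```python
print(llm_gpu_memory_gb(70, bits=4, overhead_gb=1))  # 36.0
```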

Got more questions? Don't hesitate to join our Discord and ask away.

<a href="https://discord.gg/JeXhcmjZVm">
  <img alt="discord-invite" src="https://dcbadge.vercel.app/api/server/JeXhcmjZVm?style=flat" />
</a>

```diff
@@ -1,8 +1,15 @@
 // @ts-check
 // Note: type annotations allow type checking and IDEs autocompletion
 
-const lightCodeTheme = require("prism-react-renderer/themes/github");
-const darkCodeTheme = require("prism-react-renderer/themes/dracula");
+import remarkMath from 'remark-math';
+import rehypeKatex from 'rehype-katex';
+
+import { themes } from "prism-react-renderer"
+const lightCodeTheme = themes.github;
+const darkCodeTheme = themes.dracula;
+
+// const lightCodeTheme = require("prism-react-renderer/themes/github");
+// const darkCodeTheme = require("prism-react-renderer/themes/dracula");
 
 /** @type {import('@docusaurus/types').Config} */
 const config = {
@@ -47,26 +54,38 @@ const config = {
       /** @type {import('@docusaurus/preset-classic').Options} */
       ({
         docs: {
-          sidebarPath: require.resolve("./sidebars.js"),
+          sidebarPath: "./sidebars.js",
           // Please change this to your repo.
           // Remove this to remove the "edit this page" links.
           editUrl:
             "https://github.com/substratusai/substratusai.github.io/tree/main/",
         },
         blog: {
+          remarkPlugins: [remarkMath],
+          rehypePlugins: [rehypeKatex],
           showReadingTime: true,
           // Please change this to your repo.
           // Remove this to remove the "edit this page" links.
           editUrl:
             "https://github.com/substratusai/substratusai.github.io/tree/main/",
         },
         theme: {
-          customCss: require.resolve("./src/css/custom.css"),
+          customCss: ["./src/css/custom.css"],
         },
       }),
     ],
   ],
 
+  stylesheets: [
+    {
+      href: 'https://cdn.jsdelivr.net/npm/katex@0.13.24/dist/katex.min.css',
+      type: 'text/css',
+      integrity:
+        'sha384-odtC+0UGzzFL/6PNoE8rX/SPcQDXBJ+uRepguP4QkPCm2LBxH3FA3y+fKSiJ+AmM',
+      crossorigin: 'anonymous',
+    },
+  ],
+
   themeConfig:
     /** @type {import('@docusaurus/preset-classic').ThemeConfig} */
     ({
@@ -150,4 +169,4 @@ const config = {
   }),
 };
 
-module.exports = config;
+export default config;
```