<h1 id="bitsandbytes">bitsandbytes</h1>
<p>bitsandbytes enables accessible large language models via k-bit quantization for PyTorch. It provides three main features for dramatically reducing memory consumption during inference and training:</p>
<ul>
<li>8-bit optimizers use block-wise quantization to maintain 32-bit optimizer performance at a small fraction of the memory cost.</li>
<li>LLM.int8(), or 8-bit quantization, enables large language model inference with only half the required memory and without any performance degradation. This method uses vector-wise quantization to quantize most features to 8 bits, while outliers are treated separately with 16-bit matrix multiplication.</li>
<li>QLoRA, or 4-bit quantization, enables large language model training with several memory-saving techniques that don't compromise performance. This method quantizes a model to 4 bits and inserts a small set of trainable low-rank adaptation (LoRA) weights to allow training.</li>
</ul>
<h1 id="license">License</h1>
<p>bitsandbytes is MIT licensed.</p>
<p>We thank Fabio Cannizzo for his work on <a href="https://github.com/fabiocannizzo/FastBinarySearch" rel="nofollow">FastBinarySearch</a>, which we use for CPU quantization.</p>
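<p>The block-wise quantization idea behind the 8-bit optimizers can be illustrated in plain NumPy. This is a conceptual sketch, not the bitsandbytes kernels: the block size of 64 and the absmax scaling scheme here are illustrative assumptions. The key property is that each block is scaled independently, so one extreme value only degrades precision within its own block.</p>

```python
import numpy as np

def quantize_blockwise(x, block_size=64):
    """Quantize a 1-D float array to int8 in independent blocks.

    Each block is scaled by its own absolute maximum (absmax), so a
    single large value only affects precision within its own block.
    Illustrative sketch only; not bitsandbytes' actual implementation.
    """
    pad = (-len(x)) % block_size
    xp = np.pad(x, (0, pad)).reshape(-1, block_size)
    absmax = np.abs(xp).max(axis=1, keepdims=True)
    absmax[absmax == 0] = 1.0                      # avoid division by zero
    q = np.round(xp / absmax * 127).astype(np.int8)
    return q, absmax

def dequantize_blockwise(q, absmax, n):
    """Reconstruct the float values from int8 codes and per-block scales."""
    return (q.astype(np.float32) / 127 * absmax).reshape(-1)[:n]

rng = np.random.default_rng(0)
x = rng.standard_normal(1000).astype(np.float32)
q, scales = quantize_blockwise(x)
x_hat = dequantize_blockwise(q, scales, len(x))
print("max reconstruction error:", np.abs(x - x_hat).max())
```

Storing the int8 codes plus one scale per 64-element block costs roughly a quarter of the memory of 32-bit state, which is where the optimizer memory savings come from.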
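<p>The LLM.int8() decomposition can likewise be sketched conceptually: feature columns whose magnitude exceeds an outlier threshold are multiplied in full precision, while the remaining features use vector-wise int8 quantization with int32 accumulation. The threshold of 6.0 and the function below are illustrative assumptions, not the bitsandbytes API.</p>

```python
import numpy as np

def mixed_precision_matmul(X, W, outlier_threshold=6.0):
    """Sketch of an LLM.int8()-style mixed decomposition (illustrative,
    not the bitsandbytes kernels).

    Columns of X whose magnitude exceeds the threshold are multiplied
    in float; the rest use vector-wise int8 quantization.
    """
    outlier_cols = np.abs(X).max(axis=0) > outlier_threshold
    # Regular features: row-wise scales for X, column-wise scales for W,
    # int8 codes, int32 accumulation, then rescale back to float.
    Xr, Wr = X[:, ~outlier_cols], W[~outlier_cols, :]
    sx = np.abs(Xr).max(axis=1, keepdims=True) / 127
    sw = np.abs(Wr).max(axis=0, keepdims=True) / 127
    sx[sx == 0] = 1.0
    sw[sw == 0] = 1.0
    Xq = np.round(Xr / sx).astype(np.int8)
    Wq = np.round(Wr / sw).astype(np.int8)
    int8_part = (Xq.astype(np.int32) @ Wq.astype(np.int32)) * sx * sw
    # Outlier features: kept in full precision, added back at the end.
    fp_part = X[:, outlier_cols] @ W[outlier_cols, :]
    return int8_part + fp_part
```

Because the outlier columns bypass quantization entirely, their large magnitudes never inflate the int8 scales of the regular features, which is the intuition behind why the method avoids degradation on large models.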
