# bitsandbytes

bitsandbytes enables accessible large language models via k-bit quantization for PyTorch. bitsandbytes provides three main features for dramatically reducing memory consumption for inference and training:

- 8-bit optimizers use block-wise quantization to maintain 32-bit performance at a small fraction of the memory cost.
- LLM.int8(), or 8-bit quantization, enables large language model inference with only half the required memory and without any performance degradation. This method is based on vector-wise quantization: most features are quantized to 8 bits, while outliers are treated separately with 16-bit matrix multiplication.
- QLoRA, or 4-bit quantization, enables large language model training with several memory-saving techniques that don't compromise performance. This method quantizes a model to 4 bits and inserts a small set of trainable low-rank adaptation (LoRA) weights to allow training.

# License

bitsandbytes is MIT licensed.

We thank Fabio Cannizzo for his work on [FastBinarySearch](https://github.com/fabiocannizzo/FastBinarySearch), which we use for CPU quantization.
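To make the block-wise quantization idea behind the 8-bit optimizers concrete, here is a minimal numpy sketch: the tensor is split into fixed-size blocks, each block gets its own absmax scale, and values are rounded to int8. This is purely illustrative (the function names and block size are made up for this example); the actual bitsandbytes CUDA kernels use dynamic quantization maps and are far more sophisticated.

```python
import numpy as np

def quantize_blockwise(x, block_size=64):
    """Quantize a 1-D float array to int8 with one absmax scale per block.

    Illustrative sketch only -- not the bitsandbytes kernel. Per-block
    scaling confines the damage of a single large value to its own block.
    """
    pad = (-len(x)) % block_size
    padded = np.concatenate([x, np.zeros(pad, dtype=x.dtype)])
    blocks = padded.reshape(-1, block_size)
    # One scale per block: the largest absolute value maps to 127.
    absmax = np.abs(blocks).max(axis=1, keepdims=True)
    absmax[absmax == 0] = 1.0          # avoid division by zero for all-zero blocks
    q = np.round(blocks / absmax * 127).astype(np.int8)
    return q, absmax, len(x)

def dequantize_blockwise(q, absmax, n):
    # Undo the per-block scaling and drop the padding.
    return (q.astype(np.float32) / 127 * absmax).reshape(-1)[:n]

rng = np.random.default_rng(0)
x = rng.standard_normal(1000).astype(np.float32)
q, absmax, n = quantize_blockwise(x)
x_hat = dequantize_blockwise(q, absmax, n)
```

The round trip stores 1 byte per value plus one float per 64-value block, and the reconstruction error of any element is bounded by half a quantization step of its block.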
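The LLM.int8() bullet above describes a mixed-precision decomposition: quantize the well-behaved features vector-wise to int8 and keep outlier feature columns in higher precision. A toy numpy version of that decomposition might look like the following (the function name and the threshold of 6.0 are illustrative assumptions, not the library's API):

```python
import numpy as np

def int8_matmul_with_outliers(X, W, threshold=6.0):
    """Sketch of an LLM.int8()-style mixed-precision matmul (illustrative only).

    Feature columns of X whose magnitude exceeds `threshold` are treated as
    outliers and multiplied in full precision; everything else goes through a
    symmetric int8 path with vector-wise (per-row / per-column) scales.
    """
    outliers = np.abs(X).max(axis=0) > threshold
    Xo, Wo = X[:, outliers], W[outliers, :]        # high-precision outlier path
    Xr, Wr = X[:, ~outliers], W[~outliers, :]      # int8 path

    # Vector-wise symmetric quantization: one scale per row of X, per column of W.
    sx = np.abs(Xr).max(axis=1, keepdims=True) / 127
    sw = np.abs(Wr).max(axis=0, keepdims=True) / 127
    sx[sx == 0] = 1.0
    sw[sw == 0] = 1.0
    Xq = np.round(Xr / sx).astype(np.int8)
    Wq = np.round(Wr / sw).astype(np.int8)

    # Integer matmul, then dequantize with the outer product of the scales.
    int_part = (Xq.astype(np.int32) @ Wq.astype(np.int32)) * (sx * sw)
    return int_part + Xo @ Wo

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 64)).astype(np.float32)
X[:, 5] *= 10.0                                    # inject one outlier feature
W = rng.standard_normal((64, 16)).astype(np.float32)
Y = int8_matmul_with_outliers(X, W)
```

Because the rare outlier columns bypass quantization entirely, the int8 scales stay small and the result remains close to the full-precision product.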
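The QLoRA bullet combines two pieces: a frozen base weight stored in 4 bits and a small trainable low-rank adapter added on top. Here is a toy sketch of that structure, assuming a plain symmetric 4-bit code for simplicity (real QLoRA uses the NF4 data type and bitsandbytes' kernels; `qlora_forward` and the rank/alpha values are invented for this illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight, quantized to 4 bits (symmetric levels -7..7) for storage.
W = rng.standard_normal((64, 64)).astype(np.float32)
scale = np.abs(W).max() / 7
Wq = np.clip(np.round(W / scale), -8, 7).astype(np.int8)  # 4-bit codes

# Trainable low-rank adapter: only A and B (rank r) would receive gradients.
r, alpha = 8, 16
A = rng.standard_normal((64, r)).astype(np.float32) * 0.01
B = np.zeros((r, 64), dtype=np.float32)  # zero init: adapter starts as a no-op

def qlora_forward(x):
    """Frozen dequantized base plus scaled low-rank update (toy sketch)."""
    W_deq = Wq.astype(np.float32) * scale            # dequantize on the fly
    return x @ W_deq + (alpha / r) * (x @ A @ B)     # base + LoRA correction

x = rng.standard_normal((4, 64)).astype(np.float32)
y = qlora_forward(x)
```

With `B` initialized to zero, the adapter contributes nothing at the start of training, so the model's initial behavior is exactly that of the quantized base; training then only updates the small `A` and `B` matrices.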