How to use with Docker

docker model run hf.co/LargeWorldModel/LWM-Text-Chat-512K
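If you want to call the Docker-run model from code rather than the interactive CLI, Docker Model Runner exposes an OpenAI-compatible HTTP API. The sketch below is a minimal, hypothetical example in Python; the host, port, and endpoint path are assumptions (not taken from this card) and must be adjusted to however your Docker Model Runner endpoint is exposed.

```python
# Minimal sketch: query the model served by `docker model run` through
# Docker Model Runner's OpenAI-compatible API.
# ASSUMPTION: the TCP endpoint below (localhost:12434, /engines/v1/...) is a
# placeholder; substitute the address your Docker setup actually exposes.
import requests

resp = requests.post(
    "http://localhost:12434/engines/v1/chat/completions",  # assumed endpoint
    json={
        "model": "hf.co/LargeWorldModel/LWM-Text-Chat-512K",
        "messages": [{"role": "user", "content": "Once upon a time,"}],
        "max_tokens": 256,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```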
LWM-Text-Chat-512K Model Card
Model details
Model type: LWM-Text-Chat-512K is an open-source model trained from LLaMA-2 on a filtered subset of the Books3 dataset. It is an auto-regressive language model based on the transformer architecture.
Model date: LWM-Text-Chat-512K was trained in December 2023.
Paper or resources for more information: https://largeworldmodel.github.io/
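Because the checkpoint is an auto-regressive, LLaMA-2-based transformer, it can also be loaded directly with Hugging Face transformers. The snippet below is a minimal sketch, assuming the checkpoint is compatible with the standard causal-LM classes; the generation settings are illustrative only, and using anything близко to the full 512K-token context requires substantial GPU memory (see the project page above for the authors' own tooling).

```python
# Minimal sketch: load LWM-Text-Chat-512K as a standard causal LM with transformers.
# ASSUMPTION: the checkpoint works with AutoModelForCausalLM/AutoTokenizer;
# generation parameters here are illustrative, not taken from the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LargeWorldModel/LWM-Text-Chat-512K"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Once upon a time,"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, temperature=0.5, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```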
License
Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
Where to send questions or comments about the model: https://github.com/LargeWorldModel/lwm/issues
Training dataset
- A 3,500-document subset of Books3, consisting of documents with 500K to 1M tokens
How to use with vLLM (install from pip and serve the model)

# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "LargeWorldModel/LWM-Text-Chat-512K"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LargeWorldModel/LWM-Text-Chat-512K",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
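Since the vLLM server speaks the OpenAI-compatible API, you can also query it from Python with the official openai client instead of curl. This is a minimal sketch mirroring the curl request above; the "EMPTY" API key is a placeholder, as a local vLLM server does not require authentication by default.

```python
# Minimal sketch: call the local vLLM server through its OpenAI-compatible API.
# Mirrors the curl request above; base_url points at the server started with `vllm serve`.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="LargeWorldModel/LWM-Text-Chat-512K",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```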