Instructions for using codefuse-ai/CodeFuse-DevOps-Model-7B-Chat with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers
How to use codefuse-ai/CodeFuse-DevOps-Model-7B-Chat with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="codefuse-ai/CodeFuse-DevOps-Model-7B-Chat",
    trust_remote_code=True,
)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "codefuse-ai/CodeFuse-DevOps-Model-7B-Chat",
    trust_remote_code=True,
    dtype="auto",
)
```

- Notebooks
  - Google Colab
  - Kaggle
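For an end-to-end run, the sketch below wires the pieces together. It is a minimal example under stated assumptions: the prompt text, `max_new_tokens`, and `device_map="auto"` (which requires `accelerate`) are illustrative choices, not values from the model card.

```python
# Minimal end-to-end generation sketch for this checkpoint.
# Assumes the standard transformers generate() API; trust_remote_code=True
# is required because the model ships custom Qwen code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codefuse-ai/CodeFuse-DevOps-Model-7B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    dtype="auto",
    device_map="auto",  # assumption: accelerate is installed
)

prompt = "How do I roll back a failed Kubernetes deployment?"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)

# Strip the prompt tokens and decode only the newly generated text.
new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```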
The tokenizer configuration shipped with the model (`tokenizer_config.json`):

```json
{
  "auto_map": {
    "AutoTokenizer": [
      "tokenization_qwen.QWenTokenizer",
      null
    ]
  },
  "clean_up_tokenization_spaces": true,
  "model_max_length": 8192,
  "padding_side": "left",
  "tokenizer_class": "QWenTokenizer"
}
```
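The JSON above maps `AutoTokenizer` to the custom `QWenTokenizer` class bundled with the checkpoint, caps inputs at 8192 tokens, and sets left padding for decoder-only generation. A short sketch of loading it, assuming the standard `AutoTokenizer` API (the sample prompt is hypothetical):

```python
# Load the custom Qwen tokenizer declared in the auto_map above.
# trust_remote_code=True lets transformers import
# tokenization_qwen.QWenTokenizer from the model repository.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "codefuse-ai/CodeFuse-DevOps-Model-7B-Chat",
    trust_remote_code=True,
)

print(tokenizer.model_max_length)  # 8192, per the config above
print(tokenizer.padding_side)      # "left"

# Hypothetical prompt, just to show encoding works end to end.
ids = tokenizer("How do I check disk usage on Linux?", return_tensors="pt")
print(ids["input_ids"].shape)
```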