TheBloke/qCammel-70-x-GGUF

qCammel 70 - GGUF

Description

This repo contains GGUF format model files for augtoma’s qCammel 70.

About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.

Here is a list of clients and libraries that are known to support GGUF:

  • llama.cpp.
  • text-generation-webui, the most widely used web UI. Supports GGUF with GPU acceleration via the ctransformers backend; the llama-cpp-python backend should work soon too.
  • KoboldCpp, now supports GGUF as of release 1.41! A powerful GGML web UI, with full GPU accel. Especially good for storytelling.
  • LM Studio, version 0.2.2 and later support GGUF. A fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD) and macOS.
  • LoLLMS Web UI, which should now work; choose the c_transformers backend. A great web UI with many interesting features. Supports CUDA GPU acceleration.
  • ctransformers, now supports GGUF as of version 0.2.24! A Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
  • llama-cpp-python, supports GGUF as of version 0.1.79. A Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
  • candle, added GGUF support on August 22nd. Candle is a Rust ML framework with a focus on performance, including GPU support, and ease of use.

Repositories available

Prompt template: Vicuna

A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
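As a concrete illustration, here is a minimal sketch in plain Python of how a user message slots into this template. The helper name is hypothetical and nothing here is model-specific:

# Hypothetical helper: wrap a user message in the Vicuna template above.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(user_message: str) -> str:
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

print(build_prompt("Write a story about llamas"))
# -> A chat between ... USER: Write a story about llamas ASSISTANT: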

Compatibility

These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit 6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9.

They are now also compatible with many third party UIs and libraries - please see the list at the top of the README.

Explanation of quantisation methods

In brief, the new k-quant methods (Q2_K through Q6_K) mix quantisation types across a model's tensors to trade file size against quality loss; the table below summarises the resulting trade-offs for each provided file.

Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| qcammel-70-x.Q6_K.gguf-split-b | Q6_K | 6 | 19.89 GB | 22.39 GB | very large, extremely low quality loss |
| qcammel-70-x.Q2_K.gguf | Q2_K | 2 | 29.28 GB | 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| qcammel-70-x.Q3_K_S.gguf | Q3_K_S | 3 | 29.92 GB | 32.42 GB | very small, high quality loss |
| qcammel-70-x.Q3_K_M.gguf | Q3_K_M | 3 | 33.19 GB | 35.69 GB | very small, high quality loss |
| qcammel-70-x.Q3_K_L.gguf | Q3_K_L | 3 | 36.15 GB | 38.65 GB | small, substantial quality loss |
| qcammel-70-x.Q8_0.gguf-split-b | Q8_0 | 8 | 36.59 GB | 39.09 GB | very large, extremely low quality loss - not recommended |
| qcammel-70-x.Q6_K.gguf-split-a | Q6_K | 6 | 36.70 GB | 39.20 GB | very large, extremely low quality loss |
| qcammel-70-x.Q8_0.gguf-split-a | Q8_0 | 8 | 36.70 GB | 39.20 GB | very large, extremely low quality loss - not recommended |
| qcammel-70-x.Q4_0.gguf | Q4_0 | 4 | 38.87 GB | 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| qcammel-70-x.Q4_K_S.gguf | Q4_K_S | 4 | 39.07 GB | 41.57 GB | small, greater quality loss |
| qcammel-70-x.Q4_K_M.gguf | Q4_K_M | 4 | 41.42 GB | 43.92 GB | medium, balanced quality - recommended |
| qcammel-70-x.Q5_0.gguf | Q5_0 | 5 | 47.46 GB | 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| qcammel-70-x.Q5_K_S.gguf | Q5_K_S | 5 | 47.46 GB | 49.96 GB | large, low quality loss - recommended |
| qcammel-70-x.Q5_K_M.gguf | Q5_K_M | 5 | 48.75 GB | 51.25 GB | large, very low quality loss - recommended |
| qcammel-70-x.Q6_K.gguf | Q6_K | 6 | 56.59 GB | 59.09 GB | very large, extremely low quality loss |
| qcammel-70-x.Q8_0.gguf | Q8_0 | 8 | 73.29 GB | 75.79 GB | very large, extremely low quality loss - not recommended |

Note: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

Q6_K and Q8_0 files are split and require joining

Note: HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.



Example llama.cpp command

Make sure you are using llama.cpp from commit 6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9 or later.

For compatibility with older versions of llama.cpp, or for any third-party libraries or clients that haven’t yet updated for GGUF, please use GGML files instead.

./main -t 10 -ngl 32 -m qcammel-70-x.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Write a story about llamas ASSISTANT:"

Change -t 10 to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use -t 8. If offloading all layers to GPU, set -t 1.

Change -ngl 32 to the number of layers to offload to GPU. Remove it if you don’t have GPU acceleration.

Change -c 4096 to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the -p <PROMPT> argument with -i -ins (see the example below).
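For instance, reusing the illustrative parameter values from the command above:

./main -t 10 -ngl 32 -m qcammel-70-x.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -i -ins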

For other parameters and how to use them, please refer to the llama.cpp documentation.

How to run in text-generation-webui

Further instructions here: text-generation-webui/docs/llama.cpp.md.

How to run from Python code

You can use GGUF models from Python using the llama-cpp-python or ctransformers libraries.

How to load this model from Python using ctransformers

First install the package:

# Base ctransformers with no GPU acceleration
# (quotes prevent the shell from treating '>' as a redirect)
pip install "ctransformers>=0.2.24"
# Or with CUDA GPU acceleration
pip install "ctransformers[cuda]>=0.2.24"
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers

Simple example code to load one of these GGUF models

from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/qCammel-70-x-GGUF", model_file="qcammel-70-x.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
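For comparison, here is an equivalent minimal sketch using llama-cpp-python (version 0.1.79 or later); the model path assumes the file has already been downloaded locally, and the parameter values are illustrative:

from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU; use 0 if no GPU acceleration is available.
llm = Llama(model_path="qcammel-70-x.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=50)

output = llm("AI is going to", max_tokens=128)
print(output["choices"][0]["text"])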

How to use with LangChain

Here are guides on using llama-cpp-python or ctransformers with LangChain.
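As one hedged illustration, here is a minimal sketch using LangChain's LlamaCpp wrapper (the exact import path depends on your LangChain version; the model path and parameter values are assumptions):

from langchain.llms import LlamaCpp

# model_path must point at a locally downloaded GGUF file; n_gpu_layers is illustrative.
llm = LlamaCpp(
    model_path="qcammel-70-x.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=50,
    temperature=0.7,
)

print(llm("A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Write a story about llamas ASSISTANT:"))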

Discord

For further support, and discussions on these models and AI in general, join us at:

TheBloke AI’s Discord server

Thanks, and how to contribute

Thanks to the chirper.ai team!

I’ve had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you’re able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

Special thanks to: Aemon Algiz.

Patreon special mentions: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap’n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser

Thank you to all my generous patrons and donors!

And thank you again to a16z for their generous grant.

Original model card: augtoma’s qCammel 70

qCammel-70

qCammel-70 is a fine-tuned version of the Llama-2 70B model, trained on a distilled dataset of 15,000 instructions using QLoRA. This model is optimized for academic medical knowledge and instruction-following capabilities.

Model Details

Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit Meta's website and accept their license before downloading this model.

The fine-tuning process applied to qCammel-70 involves a distilled dataset of 15,000 instructions and is trained with QLoRA.

Variations The original Llama 2 has parameter sizes of 7B, 13B, and 70B. This is the fine-tuned version of the 70B model.

Input Models input text only.

Output Models generate text only.

Model Architecture qCammel-70 is based on the Llama 2 architecture, an auto-regressive language model that uses a decoder-only transformer.

License A custom commercial license is available from Meta AI. Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.

Research Papers
