---
base_model:
- google/gemma-2-9b-it
library_name: transformers
---

# MISHANM/google-gemma-2-9b-it.gguf

This model is a GGUF version of the Google gemma-2-9b-it model, optimized for use with the `llama.cpp` framework. It is designed to run efficiently on CPUs and can be used for various natural language processing tasks.

## Model Details
1. Language: English
2. Tasks: Text generation
3. Base Model: google/gemma-2-9b-it

## Building and Running the Model

To build and run the model using `llama.cpp`, follow these steps:

### Build llama.cpp Locally
```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
```
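
You also need the GGUF file itself. A minimal sketch of downloading it from this repository with the Hugging Face CLI (the `./models` target directory is an arbitrary choice, and the exact `.gguf` filename inside the repository may vary; check the Files tab):

```bash
# Install the Hugging Face CLI, then pull the repository contents locally
pip install -U "huggingface_hub[cli]"
huggingface-cli download MISHANM/google-gemma-2-9b-it.gguf --local-dir ./models
```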

### Run the Model

Navigate to the directory containing the compiled binaries:

```bash
cd llama.cpp/build/bin
```
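
As a quick sanity check that the build succeeded, you can ask the binary for its build information (a sketch; `--version` prints version details and exits):

```bash
./llama-cli --version
```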

### Inference with llama.cpp

Run the model with a prompt; `-m` points to the GGUF file, `-p` supplies the prompt, and `-n` limits the number of tokens generated:

```bash
./llama-cli -m /path/to/model.gguf -p "Your prompt here" -n 128
```
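
Beyond one-shot CLI calls, the build also produces `llama-server`, an HTTP server exposing an OpenAI-compatible API. A minimal sketch, with the model path and port as placeholders:

```bash
# Start the server (from llama.cpp/build/bin)
./llama-server -m /path/to/model.gguf --port 8080

# In another shell: query the OpenAI-compatible chat completions endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a haiku about CPUs."}], "max_tokens": 128}'
```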

## Citation Information

```bibtex
@misc{MISHANM/google-gemma-2-9b-it.gguf,
  author    = {Mishan Maurya},
  title     = {Introducing Google gemma-2-9b-it GGUF Model},
  year      = {2025},
  publisher = {Hugging Face},
  journal   = {Hugging Face repository}
}
```