c01dsnap committed ec59e30 (parent: 3f3635f): Update README.md

Files changed (1): README.md (+11 -1)
---
license: cc-by-nc-sa-4.0
---
# LLM Security Evaluation

This repo contains scripts for evaluating the security abilities of LLMs. We gathered hundreds of questions covering different aspects of security, such as vulnerabilities, pentesting, and threat intelligence.

All the questions can be viewed at [https://huggingface.co/datasets/c01dsnap/LLM-Sec-Evaluation](https://huggingface.co/datasets/c01dsnap/LLM-Sec-Evaluation).

## Supported LLMs
* ChatGLM
* Baichuan
* Vicuna ([GGML format](https://huggingface.co/TheBloke/vicuna-13b-v1.3.0-GGML); see the download sketch below)

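For Vicuna, the GGML weights need to be available locally before running the scripts. A minimal download sketch, assuming the q4_0 file name used in TheBloke's repo and a local `models/` directory (both are assumptions, not specified in this README):

```bash
# Fetch one quantized GGML file from the Hugging Face Hub
# (file name assumed; substitute whichever quantization you prefer)
huggingface-cli download TheBloke/vicuna-13b-v1.3.0-GGML \
    vicuna-13b-v1.3.0.ggmlv3.q4_0.bin --local-dir models
```
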
## Usage
Because different LLMs require different running environments, we highly recommend managing your virtual envs via Miniconda.
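A minimal per-model setup sketch, assuming Miniconda is already installed (the env name `llm-sec` and the Python version are illustrative, not taken from this README):

```bash
# Create and activate an isolated environment for one model's dependencies
conda create -n llm-sec python=3.10 -y
conda activate llm-sec
```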
 
1. Install dependencies
```bash
pip install -r requirements.txt

# If you want to use a GPU, install llama-cpp-python with cuBLAS support:
LLAMA_CUBLAS=1 CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir --force-reinstall --verbose
```
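A quick sanity check that the rebuilt llama-cpp-python imports cleanly (a hypothetical verification step, not part of the repo's scripts):

```bash
# Should import without errors and print the installed version
python -c "import llama_cpp; print(llama_cpp.__version__)"
```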
2. Clone this repo
```bash
cd LLM-Sec-Evaluation

# You might need to modify the script running interpreter in evaluate.py
bash evaluate.sh
```
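The interpreter note above matters if you follow the Miniconda advice: evaluate.py has to run with the Python from the env that holds a given model's dependencies. A hypothetical way to find the path to point it at (the exact setting inside evaluate.py is not shown in this README):

```bash
# Locate the env-specific interpreter to reference from evaluate.py
conda activate llm-sec
which python   # e.g. ~/miniconda3/envs/llm-sec/bin/python
```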

## Changelog
- 2023.7.13 - Add support for ChatGLM & Baichuan
- 2023.7.17 - Add support for Vicuna