Original model: TareksLab/L3.3-TRP-BASE-80-70B
Tarek's Role Playing Base: an uncensored merge.
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the SCE merge method, with nbeerbower/Llama-3.1-Nemotron-lorablated-70B as the base.
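For intuition, here is a heavily simplified, hypothetical sketch of an SCE-style merge on a single weight tensor. The function name `sce_merge` and the exact selection and erasure rules are assumptions for illustration, not mergekit's implementation: task vectors are taken against the base, only the highest-variance fraction of elements (the `select_topk` parameter in the config below) is kept, per-model fusion weights come from the surviving magnitudes, and elements whose sign conflicts with the elementwise majority are erased before fusing.

```python
import torch

def sce_merge(base: torch.Tensor, models: list[torch.Tensor],
              select_topk: float = 0.80) -> torch.Tensor:
    """Toy SCE-style merge of one weight tensor (illustrative only)."""
    # Task vectors: each model's delta from the shared base weights
    deltas = torch.stack([m - base for m in models])        # (n_models, ...)

    # Select: keep only the select_topk fraction of elements with the
    # highest variance across models, zeroing out the rest
    var = deltas.var(dim=0)
    k = max(1, int(select_topk * var.numel()))
    threshold = var.flatten().topk(k).values.min()
    deltas = deltas * (var >= threshold)

    # Calculate: per-model fusion weights from surviving squared magnitude
    weights = (deltas ** 2).flatten(1).sum(dim=1)
    weights = weights / weights.sum().clamp(min=1e-12)

    # Erase: drop elements whose sign disagrees with the elementwise majority
    majority = deltas.sum(dim=0).sign()
    deltas = torch.where(deltas.sign() == majority, deltas,
                         torch.zeros_like(deltas))

    # Fuse the weighted task vectors back onto the base
    shape = (-1,) + (1,) * (deltas.dim() - 1)
    return base + (weights.view(shape) * deltas).sum(dim=0)
```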
Models Merged
The following models were included in the merge:
- SicariusSicariiStuff/Negative_LLAMA_70B
- Sao10K/L3-70B-Euryale-v2.1
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
  - model: Sao10K/L3-70B-Euryale-v2.1
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
merge_method: sce
base_model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
parameters:
  select_topk: 0.80
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
```
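Assuming mergekit is installed, a merge like this should be reproducible by saving the YAML above to a file and running mergekit's CLI, e.g. `mergekit-yaml config.yaml ./output-model --cuda` (flags depend on your mergekit version and hardware; shown for illustration).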
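This repository holds a 4.5 bits-per-weight (h6) EXL2 quantization of the merge, so it can be loaded with the exllamav2 library. A minimal loading sketch, assuming a recent exllamav2 release and a local copy of the weights; the directory path, prompt, and generation settings are illustrative:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Illustrative local path to the downloaded 4.5bpw-h6-exl2 weights
model_dir = "./L3.3-TRP-BASE-80-70B-4.5bpw-h6-exl2"

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
# At ~4.5 bits per weight, a 70B model is still on the order of 40 GB,
# so autosplit spreads layers across all available GPUs
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Write a short scene:", max_new_tokens=128))
```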