saassa williamberman committed on
Commit 2d275d1 · 0 Parent(s):

Duplicate from diffusers/controlnet-canny-sdxl-1.0


Co-authored-by: Will Berman <williamberman@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,41 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
out_bird.png filter=lfs diff=lfs merge=lfs -text
out_couple.png filter=lfs diff=lfs merge=lfs -text
out_room.png filter=lfs diff=lfs merge=lfs -text
out_tornado.png filter=lfs diff=lfs merge=lfs -text
out_women.png filter=lfs diff=lfs merge=lfs -text
out_hug_lab_7.png filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,107 @@
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
inference: false
duplicated_from: diffusers/controlnet-canny-sdxl-1.0
---

# SDXL-controlnet: Canny

These are ControlNet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with Canny edge conditioning. You can find some example images below.

prompt: a couple watching a romantic sunset, 4k photo
![images_0](./out_couple.png)

prompt: ultrarealistic shot of a furry blue bird
![images_1](./out_bird.png)

prompt: a woman, close up, detailed, beautiful, street photography, photorealistic, detailed, Kodak ektar 100, natural, candid shot
![images_2](./out_women.png)

prompt: Cinematic, neoclassical table in the living room, cinematic, contour, lighting, highly detailed, winter, golden hour
![images_3](./out_room.png)

prompt: a tornado hitting grass field, 1980's film grain. overcast, muted colors.
![images_0](./out_tornado.png)

## Usage

Make sure to first install the libraries:

```bash
pip install accelerate transformers safetensors opencv-python diffusers
```

And then we're ready to go:

```python
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL
from diffusers.utils import load_image
from PIL import Image
import torch
import numpy as np
import cv2

prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
negative_prompt = "low quality, bad quality, sketches"

image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png")

controlnet_conditioning_scale = 0.5  # recommended for good generalization

# Load the ControlNet, the fp16-fixed SDXL VAE, and the SDXL base pipeline
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0",
    torch_dtype=torch.float16
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    vae=vae,
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()

# Extract Canny edges and stack them into a 3-channel conditioning image
image = np.array(image)
image = cv2.Canny(image, 100, 200)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
image = Image.fromarray(image)

images = pipe(
    prompt, negative_prompt=negative_prompt, image=image, controlnet_conditioning_scale=controlnet_conditioning_scale,
).images

images[0].save("hug_lab.png")
```

![images_10](./out_hug_lab_7.png)

For more details, check out the official documentation of [`StableDiffusionXLControlNetPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/controlnet_sdxl).

### Training

Our training script was built on top of the official training script that we provide [here](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/README_sdxl.md).

#### Training data
This checkpoint was first trained for 20,000 steps on laion 6a resized so that the smaller image dimension is at most 384.
It was then further trained for 20,000 steps on laion 6a resized so that the smaller image dimension is at most 1024, and
filtered to contain only images with a minimum dimension of 1024. We found that the further high-resolution fine-tuning was
necessary for image quality.

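The exact preprocessing code is not part of this repository; the snippet below is only a minimal sketch of the resizing and filtering described above (the helper names and the use of PIL are assumptions made for illustration):

```python
from PIL import Image

def resize_to_max_min_dim(img: Image.Image, target: int) -> Image.Image:
    """Downscale so that the smaller side is at most `target`, keeping the aspect ratio."""
    w, h = img.size
    short = min(w, h)
    if short <= target:
        return img
    scale = target / short
    return img.resize((round(w * scale), round(h * scale)), Image.BICUBIC)

def keep_for_high_res_stage(img: Image.Image, min_dim: int = 1024) -> bool:
    """Second-stage filter: keep only images whose smaller side is at least `min_dim`."""
    return min(img.size) >= min_dim
```

The first stage corresponds to `target=384`, the second to `target=1024`.
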
#### Compute
One 8xA100 machine.

#### Batch size
Data parallel with a single-GPU batch size of 8 for a total batch size of 64.

#### Hyper Parameters
Constant learning rate of 1e-4, scaled by the total batch size for an effective learning rate of 64e-4.

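For concreteness, the scaling works out as follows (a back-of-the-envelope sketch; the GPU count is taken from the compute section above):

```python
num_gpus = 8                                        # one 8xA100 machine
per_gpu_batch_size = 8
total_batch_size = num_gpus * per_gpu_batch_size    # 64
base_lr = 1e-4
scaled_lr = base_lr * total_batch_size              # 64e-4 == 6.4e-3
```
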
#### Mixed precision
fp16
config.json ADDED
@@ -0,0 +1,57 @@
{
  "_class_name": "ControlNetModel",
  "_diffusers_version": "0.20.0.dev0",
  "_name_or_path": "../controlnet-1-0-canny/checkpoint-20000/controlnet",
  "act_fn": "silu",
  "addition_embed_type": "text_time",
  "addition_embed_type_num_heads": 64,
  "addition_time_embed_dim": 256,
  "attention_head_dim": [
    5,
    10,
    20
  ],
  "block_out_channels": [
    320,
    640,
    1280
  ],
  "class_embed_type": null,
  "conditioning_channels": 3,
  "conditioning_embedding_out_channels": [
    16,
    32,
    96,
    256
  ],
  "controlnet_conditioning_channel_order": "rgb",
  "cross_attention_dim": 2048,
  "down_block_types": [
    "DownBlock2D",
    "CrossAttnDownBlock2D",
    "CrossAttnDownBlock2D"
  ],
  "downsample_padding": 1,
  "encoder_hid_dim": null,
  "encoder_hid_dim_type": null,
  "flip_sin_to_cos": true,
  "freq_shift": 0,
  "global_pool_conditions": false,
  "in_channels": 4,
  "layers_per_block": 2,
  "mid_block_scale_factor": 1,
  "norm_eps": 1e-05,
  "norm_num_groups": 32,
  "num_attention_heads": null,
  "num_class_embeds": null,
  "only_cross_attention": false,
  "projection_class_embeddings_input_dim": 2816,
  "resnet_time_scale_shift": "default",
  "transformer_layers_per_block": [
    1,
    2,
    10
  ],
  "upcast_attention": null,
  "use_linear_projection": true
}
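A quick way to sanity-check these values without downloading the full weights is to fetch only the config through diffusers. The sketch below uses the upstream repo id listed in the model card; substitute this repository's id when working with the duplicate:

```python
from diffusers import ControlNetModel

# Downloads only config.json from the Hub and returns it as a plain dict
config = ControlNetModel.load_config("diffusers/controlnet-canny-sdxl-1.0")

assert config["_class_name"] == "ControlNetModel"
assert config["cross_attention_dim"] == 2048
assert config["block_out_channels"] == [320, 640, 1280]
assert config["transformer_layers_per_block"] == [1, 2, 10]
```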
diffusion_pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:982e12f72cb41031c060de08a5d78ff6912e9b02c6ad91fc480f05a72cad10cb
size 5004438321
diffusion_pytorch_model.fp16.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a42da57d6e2fd6ec786ccfea1cf1a06d2c1d91b2d8a14c7de3a67553b10b2948
size 2502401039
diffusion_pytorch_model.fp16.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b2e7d3921058a442cc80430d1ec8847f42599c705e2451c95e77cf4dcf8d6c25
size 2502139136
diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ea99040544a999f814fd854575a3aee069a005d026864c8d321b82576706a221
size 5004167864
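The four weight files above are stored through Git LFS, so the repository itself only contains pointer files whose `oid` is the SHA-256 of the actual payload. As a rough illustration, a downloaded copy can be checked against the hash listed for `diffusion_pytorch_model.fp16.safetensors`; the repo id below is the upstream one from the model card, so adjust it if you download from the duplicate:

```python
import hashlib

from huggingface_hub import hf_hub_download

# Resolve the LFS pointer and download the actual fp16 safetensors weights
path = hf_hub_download(
    repo_id="diffusers/controlnet-canny-sdxl-1.0",
    filename="diffusion_pytorch_model.fp16.safetensors",
)

# Recompute the SHA-256 and compare it with the oid from the pointer file above
sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

expected = "b2e7d3921058a442cc80430d1ec8847f42599c705e2451c95e77cf4dcf8d6c25"
print(sha256.hexdigest() == expected)  # True if the download is intact
```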
out_bird.png ADDED

Git LFS Details

  • SHA256: 596aecc5291798f1f25b665a92110627f7661d13d4eef3ae038b175de9db93c2
  • Pointer size: 132 Bytes
  • Size of remote file: 6.94 MB
out_couple.png ADDED

Git LFS Details

  • SHA256: 3613a6c3119ccc39d687ed917c8b364f05c17e16c1d9cf0c36816ef07da80868
  • Pointer size: 132 Bytes
  • Size of remote file: 7.29 MB
out_hug_lab_7.png ADDED

Git LFS Details

  • SHA256: 33d7a1f77d34f565df9910bf8a3276817cb21fa7f4025174f93f1f3517b2a4f1
  • Pointer size: 132 Bytes
  • Size of remote file: 1.97 MB
out_room.png ADDED

Git LFS Details

  • SHA256: fce39ad0ca4c081aecd90a2c16118bc38d89ac27c7ca25d59a80d470b1c4ed31
  • Pointer size: 132 Bytes
  • Size of remote file: 6.42 MB
out_tornado.png ADDED

Git LFS Details

  • SHA256: b2ecd942a61e5ee4f8b5d1c8d108232a0e3a012036eaa4b8865ebfd0b7e15346
  • Pointer size: 132 Bytes
  • Size of remote file: 8.51 MB
out_women.png ADDED

Git LFS Details

  • SHA256: ada3827272bdd1599372d5d8842a173e054bc1164097c4e347c4bcd7cfcd3c7c
  • Pointer size: 132 Bytes
  • Size of remote file: 7.59 MB