From 33c02ee160a39aa4586625fc77f4f7d484f87101 Mon Sep 17 00:00:00 2001 From: Steven Date: Thu, 23 Feb 2023 12:03:31 -0800 Subject: [PATCH 1/5] first draft --- docs/source/en/_toctree.yml | 4 + docs/source/en/tutorials/basic_training.mdx | 408 ++++++++++++++++++++ 2 files changed, 412 insertions(+) create mode 100644 docs/source/en/tutorials/basic_training.mdx diff --git a/docs/source/en/_toctree.yml b/docs/source/en/_toctree.yml index cfbdac08a3fb..0651afdbfe51 100644 --- a/docs/source/en/_toctree.yml +++ b/docs/source/en/_toctree.yml @@ -8,6 +8,10 @@ - local: installation title: Installation title: Get started +- sections: + - local: tutorials/basic_training + title: Train a diffusion model + title: Tutorials - sections: - sections: - local: using-diffusers/loading diff --git a/docs/source/en/tutorials/basic_training.mdx b/docs/source/en/tutorials/basic_training.mdx new file mode 100644 index 000000000000..4c160c268dcf --- /dev/null +++ b/docs/source/en/tutorials/basic_training.mdx @@ -0,0 +1,408 @@ + + +[[open-in-colab]] + +# Train a diffusion model + +A popular application of diffusion models is unconditional image generation. The model generates an image that resembles the dataset it was trained on. Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the [Hub](https://huggingface.co/models?library=diffusers&sort=downloads), but if you can't find one you like, you can always train your own! + +This tutorial will show you how to finetune a [`UNet2DModel`] on a subset of the [Smithsonian Butterflies](https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset) dataset to generate your own ๐Ÿฆ‹ butterflies ๐Ÿฆ‹. + + + +This training tutorial is based on the [Training with ๐Ÿงจ Diffusers](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) notebook. For additional details and context about diffusion models such as how they work, check out the notebook! + + + +Before you begin, make sure you have ๐Ÿค— Datasets installed to load and preprocess image datasets and ๐Ÿค— Accelerate to simplify training on any number of GPUs. The following command will also install [TensorBoard](https://www.tensorflow.org/tensorboard) to visualize training metrics. + +```bash +!pip install diffusers[training]==0.11.1 +``` + +We encourage you to share your model with the community, and in order to do that, you'll need to login to your Hugging Face account (create one [here](https://hf.co/join) if you don't already have one!) and enter your token when prompted: + +```py +>>> from huggingface_hub import notebook_login + +>>> notebook_login() +``` + +Since the model checkpoints are quite large, install [Git-LFS](https://git-lfs.com/) to version these large files: + +```bash +!sudo apt -qq install git-lfs +!git config --global credential.helper store +``` + +## Training configuration + +For convenience, create a `TrainingConfig` class containing all the training hyperparameters (feel free to adjust them): + +```py +>>> from dataclasses import dataclass + + +>>> @dataclass +... class TrainingConfig: +... image_size = 128 # the generated image resolution +... train_batch_size = 16 +... eval_batch_size = 16 # how many images to sample during evaluation +... num_epochs = 50 +... gradient_accumulation_steps = 1 +... learning_rate = 1e-4 +... lr_warmup_steps = 500 +... save_image_epochs = 10 +... save_model_epochs = 30 +... 
mixed_precision = "fp16" # `no` for float32, `fp16` for automatic mixed precision +... output_dir = "ddpm-butterflies-128" # the model namy locally and on the HF Hub + +... push_to_hub = True # whether to upload the saved model to the HF Hub +... hub_private_repo = False +... overwrite_output_dir = True # overwrite the old model when re-running the notebook +... seed = 0 + + +>>> config = TrainingConfig() +``` + +## Load the dataset + +With the ๐Ÿค— Datasets library, you can easily load the [Smithsonian Butterflies](https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset) dataset: + +```py +>>> from datasets import load_dataset + +>>> config.dataset_name = "huggan/smithsonian_butterflies_subset" +>>> dataset = load_dataset(config.dataset_name, split="train") +``` + + + +You can find additional datasets from the [HugGan Community Event](https://huggingface.co/huggan) or you can use your own dataset by creating a local [`ImageFolder`](https://huggingface.co/docs/datasets/image_dataset#imagefolder). Just replace `config.dataset_name` with the repository id of the dataset if it is from the HugGan Community Event, or `imagefolder` if you're using your own images. + + + +๐Ÿค— Datasets uses the [`~datasets.Image`] feature to automatically decode the image data and load it as a [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html) which we can visualize: + +```py +>>> import matplotlib.pyplot as plt + +>>> fig, axs = plt.subplots(1, 4, figsize=(16, 4)) +>>> for i, image in enumerate(dataset[:4]["image"]): +... axs[i].imshow(image) +... axs[i].set_axis_off() +>>> fig.show() +``` + +
+ +The images are all different sizes though, so you'll need to preprocess them first: + +* `Resize` changes the image size to the one defined in `config.image_size`. +* `RandomHorizontalFlip` augments the dataset by randomly mirroring the images. +* `Normalize` is important to rescale the pixel values into a [-1, 1] range (which our model will expect). + +```py +>>> from torchvision import transforms + +>>> preprocess = transforms.Compose( +... [ +... transforms.Resize((config.image_size, config.image_size)), +... transforms.RandomHorizontalFlip(), +... transforms.ToTensor(), +... transforms.Normalize([0.5], [0.5]), +... ] +... ) +``` + +Use ๐Ÿค— Datasets' [`~Dataset.set_transform`] method to apply the preprocessing on the fly during training: + +```py +>>> def transform(examples): +... images = [preprocess(image.convert("RGB")) for image in examples["image"]] +... return {"images": images} + + +>>> dataset.set_transform(transform) +``` + +Feel free to visualize the images again to confirm that they've been resized. Now you're ready to wrap the dataset in a [DataLoader](https://pytorch.org/docs/stable/data#torch.utils.data.DataLoader) for training! + +```py +>>> import torch + +>>> train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=config.train_batch_size, shuffle=True) +``` + +## Create a UNet2DModel + +Pretrained models in ๐Ÿงจ Diffusers are easily created from their model class with the parameters you want. For example, to create a [`UNet2DModel`]: + +```py +>>> from diffusers import UNet2DModel + +>>> model = UNet2DModel( +... sample_size=config.image_size, # the target image resolution +... in_channels=3, # the number of input channels, 3 for RGB images +... out_channels=3, # the number of output channels +... layers_per_block=2, # how many ResNet layers to use per UNet block +... block_out_channels=(128, 128, 256, 256, 512, 512), # the number of output channes for each UNet block +... down_block_types=( +... "DownBlock2D", # a regular ResNet downsampling block +... "DownBlock2D", +... "DownBlock2D", +... "DownBlock2D", +... "AttnDownBlock2D", # a ResNet downsampling block with spatial self-attention +... "DownBlock2D", +... ), +... up_block_types=( +... "UpBlock2D", # a regular ResNet upsampling block +... "AttnUpBlock2D", # a ResNet upsampling block with spatial self-attention +... "UpBlock2D", +... "UpBlock2D", +... "UpBlock2D", +... "UpBlock2D", +... ), +... ) +``` + +It is often a good idea to quickly check the sample image shape, and the model output shape to ensure they match: + +```py +>>> sample_image = dataset[0]["images"].unsqueeze(0) +>>> print("Input shape:", sample_image.shape) +Input shape: torch.Size([1, 3, 128, 128]) + +>>> print("Output shape:", model(sample_image, timestep=0).sample.shape) +Output shape: torch.Size([1, 3, 128, 128]) +``` + +Great! Now you'll need a scheduler to add some noise to an image. + +## Create a scheduler + +The scheduler behaves differently depending on whether you're using the model for training or inference. During inference, the scheduler generates image from the noise. During training, the scheduler takes a model output - also known as a sample - from a specific point in the diffusion process and applies noise to the image using a *noise schedule* and an *update rule*. 
+ +Let's take a look at the [`DDPMScheduler`] and use the `add_noise` method to add some random noise to the `sample_image` from before: + +```py +>>> import torch +>>> from PIL import Image +>>> from diffusers import DDPMScheduler + +>>> noise_scheduler = DDPMScheduler(num_train_timesteps=1000) +>>> noise = torch.randn(sample_image.shape) +>>> timesteps = torch.LongTensor([50]) +>>> noisy_image = noise_scheduler.add_noise(sample_image, noise, timesteps) + +>>> Image.fromarray(((noisy_image.permute(0, 2, 3, 1) + 1.0) * 127.5).type(torch.uint8).numpy()[0]) +``` + +
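+If you're wondering what `add_noise` just did, it applies the closed-form DDPM forward process: the clean image and the noise are blended according to the cumulative noise schedule, roughly `x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise`. As an optional sanity check - a sketch that assumes the scheduler exposes its cumulative schedule as `alphas_cumprod` - you can reproduce the noisy image by hand:
+
+```py
+>>> # blend the clean image and the noise with the scheduler's cumulative schedule
+>>> alpha_bar = noise_scheduler.alphas_cumprod[timesteps]
+>>> manually_noised = alpha_bar.sqrt() * sample_image + (1 - alpha_bar).sqrt() * noise
+>>> torch.allclose(manually_noised, noisy_image)
+True
+```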
+ +The training objective of the model is to predict the noise that was added to the image. The loss at this step can be calculated by: + +```py +>>> import torch.nn.functional as F + +>>> noise_pred = model(noisy_image, timesteps).sample +>>> loss = F.mse_loss(noise_pred, noise) +``` + +## Train the model + +By now, you have most of the pieces to start training the model and all that's missing is putting everything together. + +First, you'll need an optimizer and a learning rate scheduler: + +```py +>>> from diffusers.optimization import get_cosine_schedule_with_warmup + +>>> optimizer = torch.optim.AdamW(model.parameters(), lr=config.learning_rate) +>>> lr_scheduler = get_cosine_schedule_with_warmup( +... optimizer=optimizer, +... num_warmup_steps=config.lr_warmup_steps, +... num_training_steps=(len(train_dataloader) * config.num_epochs), +... ) +``` + +Then, you'll need to way to evaluate the model. For this, you can use the [`DDPMPipeline`] to generate a batch of sample images and save it as a grid: + +```py +>>> from diffusers import DDPMPipeline +>>> import math + + +>>> def make_grid(images, rows, cols): +... w, h = images[0].size +... grid = Image.new("RGB", size=(cols * w, rows * h)) +... for i, image in enumerate(images): +... grid.paste(image, box=(i % cols * w, i // cols * h)) +... return grid + + +>>> def evaluate(config, epoch, pipeline): +... # Sample some images from random noise (this is the backward diffusion process). +... # The default pipeline output type is `List[PIL.Image]` +... images = pipeline( +... batch_size=config.eval_batch_size, +... generator=torch.manual_seed(config.seed), +... ).images + +... # Make a grid out of the images +... image_grid = make_grid(images, rows=4, cols=4) + +... # Save the images +... test_dir = os.path.join(config.output_dir, "samples") +... os.makedirs(test_dir, exist_ok=True) +... image_grid.save(f"{test_dir}/{epoch:04d}.png") +``` + +Wrap all these components together and write a training function with ๐Ÿค— Accelerate for easy TensorBoard logging, gradient accumulation, and mixed precision training. To upload the model to the Hub, you can also write a function to get your repository name and information and then push it to the Hub. + + + +๐Ÿ’ก The training loop below may look intimidating and long, but it'll be worth it later when you launch your training in just one line of code! If you want to just skip ahead and start generating images, feel free to copy and run the code below. You can always come back and examine this code more in-depth later, say like, when you're waiting for your model to finish training. ๐Ÿค— + + + +```py +>>> from accelerate import Accelerator +>>> from huggingface_hub import HfFolder, Repository, whoami +>>> from tqdm.auto import tqdm +>>> from pathlib import Path +>>> import os + + +>>> def get_full_repo_name(model_id: str, organization: str = None, token: str = None): +... if token is None: +... token = HfFolder.get_token() +... if organization is None: +... username = whoami(token)["name"] +... return f"{username}/{model_id}" +... else: +... return f"{organization}/{model_id}" + + +>>> def train_loop(config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler): +... # Initialize accelerator and tensorboard logging +... accelerator = Accelerator( +... mixed_precision=config.mixed_precision, +... gradient_accumulation_steps=config.gradient_accumulation_steps, +... log_with="tensorboard", +... logging_dir=os.path.join(config.output_dir, "logs"), +... ) +... if accelerator.is_main_process: +... 
if config.push_to_hub: +... repo_name = get_full_repo_name(Path(config.output_dir).name) +... repo = Repository(config.output_dir, clone_from=repo_name) +... elif config.output_dir is not None: +... os.makedirs(config.output_dir, exist_ok=True) +... accelerator.init_trackers("train_example") + +... # Prepare everything +... # There is no specific order to remember, you just need to unpack the +... # objects in the same order you gave them to the prepare method. +... model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( +... model, optimizer, train_dataloader, lr_scheduler +... ) + +... global_step = 0 + +... # Now you train the model +... for epoch in range(config.num_epochs): +... progress_bar = tqdm(total=len(train_dataloader), disable=not accelerator.is_local_main_process) +... progress_bar.set_description(f"Epoch {epoch}") + +... for step, batch in enumerate(train_dataloader): +... clean_images = batch["images"] +... # Sample noise to add to the images +... noise = torch.randn(clean_images.shape).to(clean_images.device) +... bs = clean_images.shape[0] + +... # Sample a random timestep for each image +... timesteps = torch.randint( +... 0, noise_scheduler.num_train_timesteps, (bs,), device=clean_images.device +... ).long() + +... # Add noise to the clean images according to the noise magnitude at each timestep +... # (this is the forward diffusion process) +... noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps) + +... with accelerator.accumulate(model): +... # Predict the noise residual +... noise_pred = model(noisy_images, timesteps, return_dict=False)[0] +... loss = F.mse_loss(noise_pred, noise) +... accelerator.backward(loss) + +... accelerator.clip_grad_norm_(model.parameters(), 1.0) +... optimizer.step() +... lr_scheduler.step() +... optimizer.zero_grad() + +... progress_bar.update(1) +... logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0], "step": global_step} +... progress_bar.set_postfix(**logs) +... accelerator.log(logs, step=global_step) +... global_step += 1 + +... # After each epoch you optionally sample some demo images with evaluate() and save the model +... if accelerator.is_main_process: +... pipeline = DDPMPipeline(unet=accelerator.unwrap_model(model), scheduler=noise_scheduler) + +... if (epoch + 1) % config.save_image_epochs == 0 or epoch == config.num_epochs - 1: +... evaluate(config, epoch, pipeline) + +... if (epoch + 1) % config.save_model_epochs == 0 or epoch == config.num_epochs - 1: +... if config.push_to_hub: +... repo.push_to_hub(commit_message=f"Epoch {epoch}", blocking=True) +... else: +... pipeline.save_pretrained(config.output_dir) +``` + +Phew, that was quite a bit of code! But you're finally ready to launch the training with ๐Ÿค— Accelerate's [`~accelerate.notebook_launcher`] function. Pass the function the training loop, all the training arguments, and the number of processes (feel free to change this value to the number of GPUs available to you) to use for training: + +```py +>>> from accelerate import notebook_launcher + +>>> args = (config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler) + +>>> notebook_launcher(train_loop, args, num_processes=1) +``` + +Once training is complete, take a look at the final images generated by your diffusion model: + +```py +>>> import glob + +>>> sample_images = sorted(glob.glob(f"{config.output_dir}/samples/*.png")) +>>> Image.open(sample_images[-1]) +``` + +
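+If you pushed the model to the Hub during training, you can also reload the trained pipeline by its repository id and sample from it directly. The repository id below is a placeholder - replace it with your own username and `config.output_dir`:
+
+```py
+>>> from diffusers import DDPMPipeline
+
+>>> # load the trained weights and scheduler from the Hub (placeholder repo id)
+>>> pipeline = DDPMPipeline.from_pretrained("your-username/ddpm-butterflies-128")
+>>> image = pipeline(batch_size=1, generator=torch.manual_seed(config.seed)).images[0]
+>>> image
+```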
+ +## Next steps + +Now that you understand how to train a diffusion model for unconditional image generation, you can explore other training techniques and tasks by visiting the [๐Ÿงจ Diffusers Training Examples](./training/overview) page. Here are some examples of what you can learn: + +* [Textual Inversion](./training/text_inversion), an algorithm that teaches a model a specific visual concept and integrates it into the generated image. +* [Dreambooth](./training/dreambooth), a technique for generating personalized images of a subject given several input images of the subject. +* [Guide](./training/text2image) to finetuning a Stable Diffusion model on your own dataset. +* [Guide](./training/lora) to using LoRA, a memory-efficient technique for finetuning really large models faster. \ No newline at end of file From 76959046b5788abb480220744887f6e1fd8be538 Mon Sep 17 00:00:00 2001 From: Steven Date: Thu, 23 Feb 2023 13:26:03 -0800 Subject: [PATCH 2/5] =?UTF-8?q?=E2=9C=A8=20minor=20edits?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- docs/source/en/tutorials/basic_training.mdx | 32 ++++++++++----------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/docs/source/en/tutorials/basic_training.mdx b/docs/source/en/tutorials/basic_training.mdx index 4c160c268dcf..6244922dd1e7 100644 --- a/docs/source/en/tutorials/basic_training.mdx +++ b/docs/source/en/tutorials/basic_training.mdx @@ -20,7 +20,7 @@ This tutorial will show you how to finetune a [`UNet2DModel`] on a subset of the -This training tutorial is based on the [Training with ๐Ÿงจ Diffusers](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) notebook. For additional details and context about diffusion models such as how they work, check out the notebook! +๐Ÿ’ก This training tutorial is based on the [Training with ๐Ÿงจ Diffusers](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) notebook. For additional details and context about diffusion models like how they work, check out the notebook! @@ -47,7 +47,7 @@ Since the model checkpoints are quite large, install [Git-LFS](https://git-lfs.c ## Training configuration -For convenience, create a `TrainingConfig` class containing all the training hyperparameters (feel free to adjust them): +For convenience, create a `TrainingConfig` class containing the training hyperparameters (feel free to adjust them): ```py >>> from dataclasses import dataclass @@ -65,7 +65,7 @@ For convenience, create a `TrainingConfig` class containing all the training hyp ... save_image_epochs = 10 ... save_model_epochs = 30 ... mixed_precision = "fp16" # `no` for float32, `fp16` for automatic mixed precision -... output_dir = "ddpm-butterflies-128" # the model namy locally and on the HF Hub +... output_dir = "ddpm-butterflies-128" # the model name locally and on the HF Hub ... push_to_hub = True # whether to upload the saved model to the HF Hub ... 
hub_private_repo = False @@ -78,7 +78,7 @@ For convenience, create a `TrainingConfig` class containing all the training hyp ## Load the dataset -With the ๐Ÿค— Datasets library, you can easily load the [Smithsonian Butterflies](https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset) dataset: +You can easily load the [Smithsonian Butterflies](https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset) dataset with the ๐Ÿค— Datasets library: ```py >>> from datasets import load_dataset @@ -89,7 +89,7 @@ With the ๐Ÿค— Datasets library, you can easily load the [Smithsonian Butterflies -You can find additional datasets from the [HugGan Community Event](https://huggingface.co/huggan) or you can use your own dataset by creating a local [`ImageFolder`](https://huggingface.co/docs/datasets/image_dataset#imagefolder). Just replace `config.dataset_name` with the repository id of the dataset if it is from the HugGan Community Event, or `imagefolder` if you're using your own images. +๐Ÿ’ก You can find additional datasets from the [HugGan Community Event](https://huggingface.co/huggan) or you can use your own dataset by creating a local [`ImageFolder`](https://huggingface.co/docs/datasets/image_dataset#imagefolder). Set `config.dataset_name` to the repository id of the dataset if it is from the HugGan Community Event, or `imagefolder` if you're using your own images. @@ -128,7 +128,7 @@ The images are all different sizes though, so you'll need to preprocess them fir ... ) ``` -Use ๐Ÿค— Datasets' [`~Dataset.set_transform`] method to apply the preprocessing on the fly during training: +Use ๐Ÿค— Datasets' [`~Dataset.set_transform`] method to apply `preprocess` function on the fly during training: ```py >>> def transform(examples): @@ -159,7 +159,7 @@ Pretrained models in ๐Ÿงจ Diffusers are easily created from their model class wi ... in_channels=3, # the number of input channels, 3 for RGB images ... out_channels=3, # the number of output channels ... layers_per_block=2, # how many ResNet layers to use per UNet block -... block_out_channels=(128, 128, 256, 256, 512, 512), # the number of output channes for each UNet block +... block_out_channels=(128, 128, 256, 256, 512, 512), # the number of output channels for each UNet block ... down_block_types=( ... "DownBlock2D", # a regular ResNet downsampling block ... "DownBlock2D", @@ -179,7 +179,7 @@ Pretrained models in ๐Ÿงจ Diffusers are easily created from their model class wi ... ) ``` -It is often a good idea to quickly check the sample image shape, and the model output shape to ensure they match: +It is often a good idea to quickly check the sample image shape matches the model output shape: ```py >>> sample_image = dataset[0]["images"].unsqueeze(0) @@ -190,11 +190,11 @@ Input shape: torch.Size([1, 3, 128, 128]) Output shape: torch.Size([1, 3, 128, 128]) ``` -Great! Now you'll need a scheduler to add some noise to an image. +Great! Next, you'll need a scheduler to add some noise to an image. ## Create a scheduler -The scheduler behaves differently depending on whether you're using the model for training or inference. During inference, the scheduler generates image from the noise. During training, the scheduler takes a model output - also known as a sample - from a specific point in the diffusion process and applies noise to the image using a *noise schedule* and an *update rule*. +The scheduler behaves differently depending on whether you're using the model for training or inference. 
During inference, the scheduler generates image from the noise. During training, the scheduler takes a model output - or a sample - from a specific point in the diffusion process and applies noise to the image according to a *noise schedule* and an *update rule*. Let's take a look at the [`DDPMScheduler`] and use the `add_noise` method to add some random noise to the `sample_image` from before: @@ -215,7 +215,7 @@ Let's take a look at the [`DDPMScheduler`] and use the `add_noise` method to add -The training objective of the model is to predict the noise that was added to the image. The loss at this step can be calculated by: +The training objective of the model is to predict the noise added to the image. The loss at this step can be calculated by: ```py >>> import torch.nn.functional as F @@ -241,7 +241,7 @@ First, you'll need an optimizer and a learning rate scheduler: ... ) ``` -Then, you'll need to way to evaluate the model. For this, you can use the [`DDPMPipeline`] to generate a batch of sample images and save it as a grid: +Then, you'll need to way to evaluate the model. For evaluation, you can use the [`DDPMPipeline`] to generate a batch of sample images and save it as a grid: ```py >>> from diffusers import DDPMPipeline @@ -277,7 +277,7 @@ Wrap all these components together and write a training function with ๐Ÿค— Accel -๐Ÿ’ก The training loop below may look intimidating and long, but it'll be worth it later when you launch your training in just one line of code! If you want to just skip ahead and start generating images, feel free to copy and run the code below. You can always come back and examine this code more in-depth later, say like, when you're waiting for your model to finish training. ๐Ÿค— +๐Ÿ’ก The training loop below may look intimidating and long, but it'll be worth it later when you launch your training in just one line of code! If you can't wait and want to start generating images, feel free to copy and run the code below. You can always come back and examine the training looop more in-depth later, like when you're waiting for your model to finish training. ๐Ÿค— @@ -375,7 +375,7 @@ Wrap all these components together and write a training function with ๐Ÿค— Accel ... pipeline.save_pretrained(config.output_dir) ``` -Phew, that was quite a bit of code! But you're finally ready to launch the training with ๐Ÿค— Accelerate's [`~accelerate.notebook_launcher`] function. Pass the function the training loop, all the training arguments, and the number of processes (feel free to change this value to the number of GPUs available to you) to use for training: +Phew, that was quite a bit of code! But you're finally ready to launch the training with ๐Ÿค— Accelerate's [`~accelerate.notebook_launcher`] function. Pass the function the training loop, all the training arguments, and the number of processes (you can change this value to the number of GPUs available to you) to use for training: ```py >>> from accelerate import notebook_launcher @@ -385,7 +385,7 @@ Phew, that was quite a bit of code! But you're finally ready to launch the train >>> notebook_launcher(train_loop, args, num_processes=1) ``` -Once training is complete, take a look at the final images generated by your diffusion model: +Once training is complete, take a look at the final ๐Ÿฆ‹ images ๐Ÿฆ‹ generated by your diffusion model! 
```py >>> import glob @@ -400,7 +400,7 @@ Once training is complete, take a look at the final images generated by your dif ## Next steps -Now that you understand how to train a diffusion model for unconditional image generation, you can explore other training techniques and tasks by visiting the [๐Ÿงจ Diffusers Training Examples](./training/overview) page. Here are some examples of what you can learn: +Unconditional image generation is one example of a task that can be trained by diffusion models. You can explore other tasks and training techniques by visiting the [๐Ÿงจ Diffusers Training Examples](./training/overview) page. Here are some examples of what you can learn: * [Textual Inversion](./training/text_inversion), an algorithm that teaches a model a specific visual concept and integrates it into the generated image. * [Dreambooth](./training/dreambooth), a technique for generating personalized images of a subject given several input images of the subject. From dcb705edbe39bec18701d894d2757d07590921fc Mon Sep 17 00:00:00 2001 From: Steven Date: Tue, 28 Feb 2023 09:23:13 -0800 Subject: [PATCH 3/5] =?UTF-8?q?=E2=9C=A8=20minor=20fixes?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- docs/source/en/tutorials/basic_training.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/source/en/tutorials/basic_training.mdx b/docs/source/en/tutorials/basic_training.mdx index 6244922dd1e7..0fe4f24688b6 100644 --- a/docs/source/en/tutorials/basic_training.mdx +++ b/docs/source/en/tutorials/basic_training.mdx @@ -27,7 +27,7 @@ This tutorial will show you how to finetune a [`UNet2DModel`] on a subset of the Before you begin, make sure you have ๐Ÿค— Datasets installed to load and preprocess image datasets and ๐Ÿค— Accelerate to simplify training on any number of GPUs. The following command will also install [TensorBoard](https://www.tensorflow.org/tensorboard) to visualize training metrics. ```bash -!pip install diffusers[training]==0.11.1 +!pip install diffusers[training]==0.13.0 ``` We encourage you to share your model with the community, and in order to do that, you'll need to login to your Hugging Face account (create one [here](https://hf.co/join) if you don't already have one!) and enter your token when prompted: @@ -128,7 +128,7 @@ The images are all different sizes though, so you'll need to preprocess them fir ... ) ``` -Use ๐Ÿค— Datasets' [`~Dataset.set_transform`] method to apply `preprocess` function on the fly during training: +Use ๐Ÿค— Datasets' [`~datasets.Dataset.set_transform`] method to apply the `preprocess` function on the fly during training: ```py >>> def transform(examples): @@ -277,7 +277,7 @@ Wrap all these components together and write a training function with ๐Ÿค— Accel -๐Ÿ’ก The training loop below may look intimidating and long, but it'll be worth it later when you launch your training in just one line of code! If you can't wait and want to start generating images, feel free to copy and run the code below. You can always come back and examine the training looop more in-depth later, like when you're waiting for your model to finish training. ๐Ÿค— +๐Ÿ’ก The training loop below may look intimidating and long, but it'll be worth it later when you launch your training in just one line of code! If you can't wait and want to start generating images, feel free to copy and run the code below. 
You can always come back and examine the training loop more in-depth later, like when you're waiting for your model to finish training. ๐Ÿค— From 1a26db3c632590b6e93891365542918a063b5438 Mon Sep 17 00:00:00 2001 From: Steven Date: Tue, 28 Feb 2023 11:17:54 -0800 Subject: [PATCH 4/5] =?UTF-8?q?=F0=9F=96=8D=20apply=20feedbacks?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- docs/source/en/tutorials/basic_training.mdx | 16 +++++++++++----- 1 file changed, 11 insertions(+), 5 deletions(-) diff --git a/docs/source/en/tutorials/basic_training.mdx b/docs/source/en/tutorials/basic_training.mdx index 0fe4f24688b6..612ff0875ba7 100644 --- a/docs/source/en/tutorials/basic_training.mdx +++ b/docs/source/en/tutorials/basic_training.mdx @@ -14,9 +14,9 @@ specific language governing permissions and limitations under the License. # Train a diffusion model -A popular application of diffusion models is unconditional image generation. The model generates an image that resembles the dataset it was trained on. Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the [Hub](https://huggingface.co/models?library=diffusers&sort=downloads), but if you can't find one you like, you can always train your own! +A popular application of diffusion models is unconditional image generation. The model generates an image that resembles the dataset it was trained on. Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the [Hub](https://huggingface.co/search/full-text?q=unconditional-image-generation&type=model), but if you can't find one you like, you can always train your own! -This tutorial will show you how to finetune a [`UNet2DModel`] on a subset of the [Smithsonian Butterflies](https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset) dataset to generate your own ๐Ÿฆ‹ butterflies ๐Ÿฆ‹. +This tutorial will show you how to train a [`UNet2DModel`] from scratch on a subset of the [Smithsonian Butterflies](https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset) dataset to generate your own ๐Ÿฆ‹ butterflies ๐Ÿฆ‹. @@ -24,13 +24,13 @@ This tutorial will show you how to finetune a [`UNet2DModel`] on a subset of the -Before you begin, make sure you have ๐Ÿค— Datasets installed to load and preprocess image datasets and ๐Ÿค— Accelerate to simplify training on any number of GPUs. The following command will also install [TensorBoard](https://www.tensorflow.org/tensorboard) to visualize training metrics. +Before you begin, make sure you have ๐Ÿค— Datasets installed to load and preprocess image datasets and ๐Ÿค— Accelerate to simplify training on any number of GPUs. The following command will also install [TensorBoard](https://www.tensorflow.org/tensorboard) to visualize training metrics (you can also use [Weights & Biases](https://docs.wandb.ai/) to track your training). ```bash !pip install diffusers[training]==0.13.0 ``` -We encourage you to share your model with the community, and in order to do that, you'll need to login to your Hugging Face account (create one [here](https://hf.co/join) if you don't already have one!) and enter your token when prompted: +We encourage you to share your model with the community, and in order to do that, you'll need to login to your Hugging Face account (create one [here](https://hf.co/join) if you don't already have one!). 
You can login from a notebook and enter your token when prompted: ```py >>> from huggingface_hub import notebook_login @@ -38,6 +38,12 @@ We encourage you to share your model with the community, and in order to do that >>> notebook_login() ``` +Or login in from the terminal: + +```bash +huggingface-cli login +``` + Since the model checkpoints are quite large, install [Git-LFS](https://git-lfs.com/) to version these large files: ```bash @@ -241,7 +247,7 @@ First, you'll need an optimizer and a learning rate scheduler: ... ) ``` -Then, you'll need to way to evaluate the model. For evaluation, you can use the [`DDPMPipeline`] to generate a batch of sample images and save it as a grid: +Then, you'll need a way to evaluate the model. For evaluation, you can use the [`DDPMPipeline`] to generate a batch of sample images and save it as a grid: ```py >>> from diffusers import DDPMPipeline From 2529d2eac161b3716b9c7b801453c51655f652b1 Mon Sep 17 00:00:00 2001 From: Steven Date: Fri, 3 Mar 2023 15:07:13 -0800 Subject: [PATCH 5/5] =?UTF-8?q?=F0=9F=96=8D=20apply=20feedback=20and=20min?= =?UTF-8?q?or=20edits?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- docs/source/en/tutorials/basic_training.mdx | 22 ++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/docs/source/en/tutorials/basic_training.mdx b/docs/source/en/tutorials/basic_training.mdx index 612ff0875ba7..1e91f81429aa 100644 --- a/docs/source/en/tutorials/basic_training.mdx +++ b/docs/source/en/tutorials/basic_training.mdx @@ -14,9 +14,9 @@ specific language governing permissions and limitations under the License. # Train a diffusion model -A popular application of diffusion models is unconditional image generation. The model generates an image that resembles the dataset it was trained on. Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the [Hub](https://huggingface.co/search/full-text?q=unconditional-image-generation&type=model), but if you can't find one you like, you can always train your own! +Unconditional image generation is a popular application of diffusion models that generates images that look like those in the dataset used for training. Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the [Hub](https://huggingface.co/search/full-text?q=unconditional-image-generation&type=model), but if you can't find one you like, you can always train your own! -This tutorial will show you how to train a [`UNet2DModel`] from scratch on a subset of the [Smithsonian Butterflies](https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset) dataset to generate your own ๐Ÿฆ‹ butterflies ๐Ÿฆ‹. +This tutorial will teach you how to train a [`UNet2DModel`] from scratch on a subset of the [Smithsonian Butterflies](https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset) dataset to generate your own ๐Ÿฆ‹ butterflies ๐Ÿฆ‹. @@ -24,10 +24,10 @@ This tutorial will show you how to train a [`UNet2DModel`] from scratch on a sub -Before you begin, make sure you have ๐Ÿค— Datasets installed to load and preprocess image datasets and ๐Ÿค— Accelerate to simplify training on any number of GPUs. 
The following command will also install [TensorBoard](https://www.tensorflow.org/tensorboard) to visualize training metrics (you can also use [Weights & Biases](https://docs.wandb.ai/) to track your training). +Before you begin, make sure you have ๐Ÿค— Datasets installed to load and preprocess image datasets, and ๐Ÿค— Accelerate, to simplify training on any number of GPUs. The following command will also install [TensorBoard](https://www.tensorflow.org/tensorboard) to visualize training metrics (you can also use [Weights & Biases](https://docs.wandb.ai/) to track your training). ```bash -!pip install diffusers[training]==0.13.0 +!pip install diffusers[training] ``` We encourage you to share your model with the community, and in order to do that, you'll need to login to your Hugging Face account (create one [here](https://hf.co/join) if you don't already have one!). You can login from a notebook and enter your token when prompted: @@ -119,7 +119,7 @@ The images are all different sizes though, so you'll need to preprocess them fir * `Resize` changes the image size to the one defined in `config.image_size`. * `RandomHorizontalFlip` augments the dataset by randomly mirroring the images. -* `Normalize` is important to rescale the pixel values into a [-1, 1] range (which our model will expect). +* `Normalize` is important to rescale the pixel values into a [-1, 1] range, which is what the model expects. ```py >>> from torchvision import transforms @@ -196,7 +196,7 @@ Input shape: torch.Size([1, 3, 128, 128]) Output shape: torch.Size([1, 3, 128, 128]) ``` -Great! Next, you'll need a scheduler to add some noise to an image. +Great! Next, you'll need a scheduler to add some noise to the image. ## Create a scheduler @@ -232,7 +232,7 @@ The training objective of the model is to predict the noise added to the image. ## Train the model -By now, you have most of the pieces to start training the model and all that's missing is putting everything together. +By now, you have most of the pieces to start training the model and all that's left is putting everything together. First, you'll need an optimizer and a learning rate scheduler: @@ -279,11 +279,11 @@ Then, you'll need a way to evaluate the model. For evaluation, you can use the [ ... image_grid.save(f"{test_dir}/{epoch:04d}.png") ``` -Wrap all these components together and write a training function with ๐Ÿค— Accelerate for easy TensorBoard logging, gradient accumulation, and mixed precision training. To upload the model to the Hub, you can also write a function to get your repository name and information and then push it to the Hub. +Now you can wrap all these components together in a training loop with ๐Ÿค— Accelerate for easy TensorBoard logging, gradient accumulation, and mixed precision training. To upload the model to the Hub, write a function to get your repository name and information and then push it to the Hub. -๐Ÿ’ก The training loop below may look intimidating and long, but it'll be worth it later when you launch your training in just one line of code! If you can't wait and want to start generating images, feel free to copy and run the code below. You can always come back and examine the training loop more in-depth later, like when you're waiting for your model to finish training. ๐Ÿค— +๐Ÿ’ก The training loop below may look intimidating and long, but it'll be worth it later when you launch your training in just one line of code! If you can't wait and want to start generating images, feel free to copy and run the code below. 
You can always come back and examine the training loop more closely later, like when you're waiting for your model to finish training. ๐Ÿค— @@ -406,9 +406,9 @@ Once training is complete, take a look at the final ๐Ÿฆ‹ images ๐Ÿฆ‹ generated b ## Next steps -Unconditional image generation is one example of a task that can be trained by diffusion models. You can explore other tasks and training techniques by visiting the [๐Ÿงจ Diffusers Training Examples](./training/overview) page. Here are some examples of what you can learn: +Unconditional image generation is one example of a task that can be trained. You can explore other tasks and training techniques by visiting the [๐Ÿงจ Diffusers Training Examples](./training/overview) page. Here are some examples of what you can learn: * [Textual Inversion](./training/text_inversion), an algorithm that teaches a model a specific visual concept and integrates it into the generated image. -* [Dreambooth](./training/dreambooth), a technique for generating personalized images of a subject given several input images of the subject. +* [DreamBooth](./training/dreambooth), a technique for generating personalized images of a subject given several input images of the subject. * [Guide](./training/text2image) to finetuning a Stable Diffusion model on your own dataset. * [Guide](./training/lora) to using LoRA, a memory-efficient technique for finetuning really large models faster. \ No newline at end of file