Stable Diffusion Inpainting

Stable Diffusion Inpainting is a latent text-to-image diffusion model capable of generating photo-realistic images from any text input, with the added capability of inpainting images using a mask.


API

If you're looking for an API, you can call it from the programming language of your choice; a Python example is shown below.

POST
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/sd1.5-inpainting"

# Request payload
data = {
    "prompt": "mecha robot sitting on a bench",
    "negative_prompt": "Disfigured, cartoon, blurry, nude",
    "samples": 1,
    "image": image_url_to_base64("https://segmind.com/inpainting-input-image.jpeg"),  # Or use image_file_to_base64("IMAGE_PATH")
    "mask": image_url_to_base64("https://segmind.com/inpainting-input-mask.jpeg"),  # Or use image_file_to_base64("IMAGE_PATH")
    "scheduler": "DDIM",
    "num_inference_steps": 25,
    "guidance_scale": 7.5,
    "strength": 1,
    "seed": 17123564234,
    "img_width": 512,
    "img_height": 512
}

headers = {'x-api-key': api_key}

response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
RESPONSE
image/jpeg
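
On success, the response body is the raw image/jpeg bytes rather than JSON, so it can be written straight to a file. A minimal sketch, continuing from the response object in the example above (the output filename is arbitrary):

# Save the raw image/jpeg bytes returned by the API
if response.status_code == 200:
    with open("inpainting-output.jpeg", "wb") as f:
        f.write(response.content)
else:
    # Non-200 responses carry an error body instead of an image
    print(response.status_code, response.content)
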
HTTP Response Codes
200 - OK: Image generated
401 - Unauthorized: User authentication failed
404 - Not Found: The requested URL does not exist
405 - Method Not Allowed: The requested HTTP method is not allowed
406 - Not Acceptable: Not enough credits
500 - Server Error: Server had some issue with processing

Attributes


prompt (str) *

Prompt to render.


negative_prompt (str) (default: None)

Prompts to exclude, e.g. 'bad anatomy, bad hands, missing fingers'.


samples (int) (default: 1) Affects Pricing

Number of samples to generate.

min: 1, max: 4


image (image) *

Input Image.


mask (image) *

Mask Image.


scheduler (enum: str) (default: DDIM)

Type of scheduler.

Allowed values:


num_inference_steps (int) (default: 20) Affects Pricing

Number of denoising steps.

min: 20, max: 100


guidance_scale (float) (default: 7.5)

Scale for classifier-free guidance.

min: 0.1, max: 25


strength (float) (default: 1)

How much to transform the reference image.

min: 0.1, max: 1


seed (int) (default: -1)

Seed for image generation.


img_width (enum: int) (default: 512) Affects Pricing

Image resolution.

Allowed values:


img_height (enum: int) (default: 512) Affects Pricing

Image resolution.

Allowed values:

To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
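
For example, continuing from the requests example above, the header can be read like this (a minimal sketch):

# Read the remaining-credits header from the API response
remaining_credits = response.headers.get("x-remaining-credits")
print("Remaining credits:", remaining_credits)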

Segmind Tiny-SD: A Faster, More Efficient Text-to-Image Model

Meet Segmind Tiny-SD, a breakthrough product by Segmind that is pushing the boundaries of generative AI models. As part of Segmind's ongoing commitment to making AI more accessible, we have released this new compact and accelerated Stable Diffusion model, which is open-sourced on Huggingface. Following the innovative research from the paper "On Architectural Compression of Text-to-Image Diffusion Models", Segmind has refined the idea and introduced two compact models: SD-Small and SD-Tiny. Each model exhibits a reduction in parameters while maintaining comparable image fidelity, with SD-Tiny achieving a reduction of 55% compared to the base model.
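
Since the model is open-sourced on Huggingface, it can be run locally with the diffusers library. A minimal sketch, assuming the segmind/tiny-sd repository id and a CUDA-capable GPU (both are assumptions to verify against the model card):

import torch
from diffusers import StableDiffusionPipeline

# Load the Tiny-SD checkpoint; "segmind/tiny-sd" is the assumed repository id
pipe = StableDiffusionPipeline.from_pretrained("segmind/tiny-sd", torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # use "cpu" (and omit torch_dtype) if no GPU is available

image = pipe("a portrait of an astronaut riding a horse").images[0]
image.save("tiny-sd-output.png")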

The technical architecture of Segmind Tiny-SD revolves around the concept of Knowledge Distillation (KD), akin to a teacher-student relationship in the world of AI. An expansive, pre-trained model (the teacher) guides a smaller model (the student) through the process of training on a smaller dataset. The unique aspect of this architecture is the incorporation of block-level output matching from the teacher model, which enables preservation of the model quality while reducing its size. The KD process involves a multi-component loss function that considers the traditional loss, the loss between the teacher and student generated latents, and importantly, the feature-level loss - the discrepancy between the block outputs of the teacher and student models.
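
To make the loss structure concrete, here is a schematic sketch of such a multi-component distillation objective in PyTorch. It is illustrative only, not Segmind's training code; the weighting factors and the way block outputs are collected are assumptions:

import torch
import torch.nn.functional as F

def distillation_loss(student_pred, teacher_pred, noise_target,
                      student_blocks, teacher_blocks,
                      w_task=1.0, w_output=1.0, w_feat=1.0):
    # Traditional denoising loss against the true noise target
    task_loss = F.mse_loss(student_pred, noise_target)
    # Output-level loss between the student and teacher predicted latents
    output_loss = F.mse_loss(student_pred, teacher_pred)
    # Feature-level loss: discrepancy between matched U-Net block outputs
    feature_loss = sum(F.mse_loss(s, t) for s, t in zip(student_blocks, teacher_blocks))
    # Illustrative weights; the actual values are not specified in the text
    return w_task * task_loss + w_output * output_loss + w_feat * feature_loss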

The significant advantage of the Segmind Tiny-SD model lies in its speed and efficiency. With up to 85% faster inference, these models are designed to drastically cut the time needed to generate results, delivering both superior performance and cost-effectiveness. The reduced size does not compromise the quality of the images, making this an ideal solution for tasks requiring high-quality image generation at a faster pace.

Segmind Tiny-SD use cases

  1. Content Generation: Quick generation of high-quality images for digital content like blogs, social media posts, and more.

  2. Game Development: Efficient and creative generation of unique game assets for independent and AAA game developers.

  3. Personalized Marketing: Faster generation of personalized visual content for digital marketing campaigns, enhancing customer engagement.

  4. AI Art and Design: Artists and designers can use it to create unique, AI-assisted visual content in less time.

  5. Research and Development: In various AI research domains, quick inference can accelerate experimentation, enabling faster progress and discovery.

Segmind Tiny-SD License

Segmind Tiny-SD is licensed under CreativeML Open RAIL-M. This license encourages both the open and responsible use of the model. It is inspired by permissive open-source licenses in terms of granting IP rights while also adding use-based restrictions to prevent misuse of the technology, be it due to technical limitations or ethical considerations. While derivative versions of the model can be released under different licensing terms, they must always include the same use-based restrictions as the original license. Thus, the license strikes a balance between open and responsible AI development, promoting open-science in the field of AI while simultaneously addressing potential misuse.