POST https://api.segmind.com/v1/sd3-med-img2img
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/sd3-med-img2img"

# Request payload
data = {
    "prompt": "photo of a boy holding phone on table,3d pixar style",
    "negative_prompt": "low quality,less details",
    "image": image_url_to_base64("https://segmind-sd-models.s3.amazonaws.com/display_images/sd3-img2img-ip.jpg"),  # Or use image_file_to_base64("IMAGE_PATH")
    "num_inference_steps": 20,
    "guidance_scale": 5,
    "seed": 698845,
    "samples": 1,
    "strength": 0.7,
    "sampler": "dpmpp_2m",
    "scheduler": "sgm_uniform",
    "base64": False
}

headers = {'x-api-key': api_key}

response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
RESPONSE
image/jpeg
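
A successful response body is the raw image/jpeg bytes, so it can be written straight to disk. A minimal sketch, reusing the response object from the example above (the output filename is a placeholder):

# Save the generated JPEG returned by the API
with open("output.jpg", "wb") as f:
    f.write(response.content)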
HTTP Response Codes
200 - OK : Image Generated
401 - Unauthorized : User authentication failed
404 - Not Found : The requested URL does not exist
405 - Method Not Allowed : The requested HTTP method is not allowed
406 - Not Acceptable : Not enough credits
500 - Server Error : Server had some issue with processing
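
A minimal sketch of branching on these status codes before using the response body (reusing url, data, and headers from the example above; the output filename is a placeholder):

response = requests.post(url, json=data, headers=headers)
if response.status_code == 200:
    # Success: the body holds the generated JPEG bytes
    with open("output.jpg", "wb") as f:
        f.write(response.content)
elif response.status_code == 401:
    print("Unauthorized: check your API key")
elif response.status_code == 406:
    print("Not Acceptable: not enough credits")
else:
    print(f"Request failed with {response.status_code}: {response.text}")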

Attributes


prompt  str *

Text prompt for image generation


negative_prompt  str ( default: low quality,less details )

Negative text prompt to avoid certain qualities


image  image *

Input image


num_inference_steps  int ( default: 20 )

Number of inference steps for image generation

min: 1, max: 100


guidance_scale  float ( default: 5 )

Guidance scale for image generation

min: 1, max: 20


seed  int ( default: 698845 )

Seed for random number generation


samples  int ( default: 1 )

Number of samples to generate


strength  float ( default: 0.7 )

Strength of the image transformation

min: 0, max: 1
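
As an illustration, a small sweep over strength values (reusing the payload, url, and headers from the example above; output filenames are placeholders) shows how lower values stay closer to the input image while higher values follow the prompt more strongly:

# Hypothetical sketch: compare several strength values with the same prompt and seed
for strength in (0.3, 0.5, 0.7, 0.9):
    data["strength"] = strength
    resp = requests.post(url, json=data, headers=headers)
    with open(f"output_strength_{strength}.jpg", "wb") as f:
        f.write(resp.content)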


sampler  enum:str ( default: dpmpp_2m )

Sampler for the image generation process

Allowed values:


scheduler  enum:str ( default: sgm_uniform )

Scheduler for the image generation process

Allowed values:


base64  bool ( default: 1 )

Whether to return the output image as a base64-encoded string
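
When base64 output is enabled, the response is encoded text rather than raw JPEG bytes. The exact response envelope is not documented on this page, so the sketch below assumes the body is the base64 string itself; if the API wraps it in JSON, extract the string from the appropriate field first.

import base64

# Assumption: the response body is the base64-encoded image itself
encoded = response.content.decode("utf-8")
with open("output.jpg", "wb") as f:
    f.write(base64.b64decode(encoded))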

To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
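
For example, the header can be read directly from the requests response object:

# Check remaining credits from the response headers of any API call
remaining_credits = response.headers.get("x-remaining-credits")
print(f"Remaining credits: {remaining_credits}")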

Stable Diffusion 3 Medium Image-to-Image

Stable Diffusion 3 Medium is a cutting-edge AI tool that uses advanced image-to-image technology to transform one image into another. It is powered by 2 billion parameters, enabling it to generate high-quality, realistic images by processing an initial image and a text prompt.

  • Capabilities: High-quality image transformations with efficient resource management, allowing operation on consumer-grade GPUs. It also provides adjustable transformation strengths to fine-tune outputs.

  • Creators: The model was developed by Stability AI.

  • Training Data Info: The details of the training data remain undisclosed, but the model was trained on large and diverse image datasets.

  • Technical Architecture: The core architecture is based on a Diffusion Transformer, allowing complex image transformations.

  • Strengths: Exceptional image transformation quality, with broad creative possibilities. It's also optimized for efficient performance.

How to Use Stable Diffusion 3 Medium for Image-to-Image?

Step-by-Step Guide:

  1. Input Image: Click on the upload area, and upload an image in PNG, JPG, or GIF format, with a maximum resolution of 2048x2048 pixels.

  2. Set the Prompt: Enter a descriptive text prompt in the field to guide the image transformation.

  3. Seed: Optionally, set a seed value. Check the "Randomize Seed" box for unique outputs each time.

  4. Strength: Adjust the 'Strength' parameter to control how much the generated image should follow the input image.

  5. Negative Prompt: Enter text in the "Negative Prompt" field to specify what to avoid.

  6. Set Advanced Parameters: Control the number of refinement steps with 'Inference Steps'. 'Guidance Scale' controls how closely the output follows the prompt. Choose the sampling method for the diffusion process with 'Sampler', and select its scheduling algorithm with 'Scheduler'.

  7. Generate: Click the "Generate" button to start the image generation process. The output image will appear once generation is complete (an equivalent API call is sketched after this list).
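
For reference, the same workflow can be reproduced through the API. Below is a minimal sketch that mirrors these steps using the endpoint and helper functions from the code example at the top of this page; the local file path, output filename, and seed range are placeholders.

import random
import requests

payload = {
    "image": image_file_to_base64("input.png"),                         # Step 1: input image (placeholder path)
    "prompt": "photo of a boy holding phone on table,3d pixar style",   # Step 2: prompt
    "seed": random.randint(0, 999999999),                               # Step 3: randomized seed (illustrative range)
    "strength": 0.7,                                                    # Step 4: transformation strength
    "negative_prompt": "low quality,less details",                      # Step 5: negative prompt
    "num_inference_steps": 20,                                          # Step 6: advanced parameters
    "guidance_scale": 5,
    "sampler": "dpmpp_2m",
    "scheduler": "sgm_uniform",
    "samples": 1,
    "base64": False
}

response = requests.post(url, json=payload, headers={"x-api-key": api_key})  # Step 7: generate
with open("generated.jpg", "wb") as f:
    f.write(response.content)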