SD3 Medium Tile ControlNet

SD3 Medium Tile ControlNet is a large generative image model designed for generating detailed images based on textual prompts and tile-based input images.


API

The model is exposed through a REST API. The example below uses Python; the same request can be made from any programming language with an HTTP client.

POST https://api.segmind.com/v1/sd3-med-tile

import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/sd3-med-tile"

# Request payload
data = {
    "prompt": "Anime style illustration of a girl wearing a suit. A moon in sky. In the background we see a big rain approaching.",
    "negative_prompt": "low quality,less details",
    "image": image_url_to_base64("https://segmind-sd-models.s3.amazonaws.com/display_images/sd3m-controlnet/sd3-tile.jpg"),  # Or use image_file_to_base64("IMAGE_PATH")
    "num_inference_steps": 20,
    "guidance_scale": 7,
    "seed": 698845,
    "samples": 1,
    "strength": 0.8,
    "sampler": "dpmpp_2m",
    "scheduler": "sgm_uniform",
    "base64": False
}

headers = {'x-api-key': api_key}

response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
RESPONSE
image/jpeg
HTTP Response Codes

200 - OK: Image generated
401 - Unauthorized: User authentication failed
404 - Not Found: The requested URL does not exist
405 - Method Not Allowed: The requested HTTP method is not allowed
406 - Not Acceptable: Not enough credits
500 - Server Error: The server had an issue while processing the request
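Since a successful response body is the raw image/jpeg bytes (when "base64" is False, as in the example above), the output can be written straight to disk. A minimal sketch, assuming response is the requests.Response from the POST above; the output filename is illustrative:

# Save the raw JPEG bytes returned by the API; `response` comes from the example above.
if response.status_code == 200:
    with open("output.jpg", "wb") as f:
        f.write(response.content)
else:
    # Non-200 responses carry an error message rather than image bytes
    print(response.status_code, response.text)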

Attributes


prompt (str, required)

Text prompt for image generation

negative_prompt (str, default: "low quality,less details")

Negative text prompt to avoid certain qualities

image (image, required)

Input image

num_inference_steps (int, default: 20)

Number of inference steps for image generation

min: 1, max: 100

guidance_scale (float, default: 7)

Guidance scale for image generation

min: 1, max: 20

seed (int, default: 698845)

Seed for random number generation

samples (int, default: 1)

Number of samples to generate

strength (float, default: 0.8)

Strength of the image transformation

min: 0, max: 1

sampler (enum: str, default: dpmpp_2m)

Sampler for the image generation process

Allowed values:

scheduler (enum: str, default: sgm_uniform)

Scheduler for the image generation process

Allowed values:

base64 (bool, default: 1)

Base64 encoding of the output image

To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
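For example, a minimal sketch of reading this header after the request in the API example above (assuming response is that requests.Response object):

# x-remaining-credits reports the credits left on your account after this call
remaining = response.headers.get("x-remaining-credits")
if remaining is not None:
    print(f"Credits remaining: {remaining}")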

Stable Diffusion 3 (SD3) Tile ControlNet

SD3 Medium Tile ControlNet is an advanced deep learning model designed for generating detailed images based on textual prompts and tile-based input images. By using tiling techniques, this model can create coherent large-scale images with a high level of detail and consistency. SD3 Medium Tile ControlNet is ideal for scenarios requiring expansive and detailed visual outputs.

How to Use the Model?

  1. Input Prompts: Provide a textual description of the desired image in the "Prompt" field.

  2. Input Image: Upload an image to guide the generation process.

  3. Negative Prompts: Indicate elements to exclude from the generation.

  4. Inference Steps: Set the number of steps for the model to refine the image. More steps typically result in higher quality.

  5. Strength: Adjust this to control how strongly the input image influences the generated output. Higher values will make the output more similar to the input tiles.

  6. Seed: Define a seed value for reproducibility. Use a random seed when consistency is not required.

  7. Guidance Scale: Adjust how closely the generated image follows the prompt. Higher values make the image adhere more closely to the prompt. A payload sketch combining these fields follows this list.
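As a minimal sketch, each numbered step above maps onto one payload field. This reuses the image_file_to_base64 helper from the API example above; the prompt text, file path, and parameter values here are placeholders, not recommended settings:

import random  # for the optional random seed in step 6

data = {
    "prompt": "Anime style illustration of a girl wearing a suit.",  # 1. text prompt (placeholder)
    "image": image_file_to_base64("IMAGE_PATH"),                     # 2. guiding input image (placeholder path)
    "negative_prompt": "low quality,less details",                   # 3. elements to exclude
    "num_inference_steps": 30,                                       # 4. refinement steps
    "strength": 0.8,                                                 # 5. input-image influence
    "seed": random.randint(0, 2**31 - 1),                            # 6. random seed when reproducibility is not needed
    "guidance_scale": 7,                                             # 7. prompt adherence
}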

How to Fine-Tune Outputs?

Fine-tuning the outputs can be achieved by adjusting several parameters:

  1. Inference Steps: Increasing the number of steps (e.g., from 20 to 50) can generate finer details but at the cost of longer processing times.

  2. Strength: Adjust the strength to control the influence of the input image. For minor adjustments, vary it between 0.6 and 0.9. Lower values give the model more creative freedom.

  3. Guidance Scale: Typically between 7 and 15. Use higher values for strict adherence to prompts and lower values for more abstract results.

  4. Sampler: Different samplers (e.g., ddim, p_sampler) can affect the generation style and speed. Experiment with these to find the optimal balance for your use case; a small sweep sketch follows this list.
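One practical way to compare these settings is a small grid sweep. A minimal sketch, reusing url, headers, and data from the API example above; the value grids and output filenames are illustrative choices, not values prescribed by the API:

# Sweep strength and guidance_scale over the ranges suggested in points 2 and 3 above
for strength in (0.6, 0.75, 0.9):
    for guidance in (7, 11, 15):
        payload = {**data, "strength": strength, "guidance_scale": guidance}
        resp = requests.post(url, json=payload, headers=headers)
        if resp.status_code == 200:
            # Save each variant under a name that encodes its settings
            with open(f"out_s{strength}_g{guidance}.jpg", "wb") as f:
                f.write(resp.content)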

Use Cases

SD3 Medium Tile ControlNet can be effectively used for various applications, such as:

  • Architectural Visualization: Generate detailed floorplans, facades, and landscape designs from textual descriptions and input tiles.

  • Game Design: Create expansive and coherent game maps and environments.

  • Graphic Design: Produce large-format graphics and posters with consistent detailing.

  • Marketing Materials: Develop intricate and high-quality visual content for marketing campaigns.