POST https://api.segmind.com/v1/flux-img2img
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/flux-img2img"

# Request payload
data = {
    "prompt": "anime style",
    "image": image_url_to_base64("https://segmind-sd-models.s3.amazonaws.com/display_images/flux-i2i-ip.png"),  # Or use image_file_to_base64("IMAGE_PATH")
    "steps": 20,
    "seed": 46588,
    "denoise": 0.75,
    "scheduler": "simple",
    "sampler_name": "euler",
    "base64": False
}

headers = {'x-api-key': api_key}

response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
RESPONSE
image/jpeg
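
Since the example request sets base64 to False, a successful response body contains the raw image bytes (image/jpeg). A minimal sketch of writing them to disk, assuming the request above has already been sent (the output filename is a placeholder):

# Continuing from the example above: the body holds the generated JPEG
if response.ok:
    with open("flux_img2img_output.jpg", "wb") as f:
        f.write(response.content)
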
HTTP Response Codes
200 - OK: Image Generated
401 - Unauthorized: User authentication failed
404 - Not Found: The requested URL does not exist
405 - Method Not Allowed: The requested HTTP method is not allowed
406 - Not Acceptable: Not enough credits
500 - Server Error: Server had some issue with processing
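
A minimal sketch of surfacing these documented error cases after the request above (the messages simply mirror the table; the exact error body format is not documented here):

# Continuing from the example above: report the documented error cases
if response.status_code != 200:
    messages = {
        401: "User authentication failed",
        404: "The requested URL does not exist",
        405: "The requested HTTP method is not allowed",
        406: "Not enough credits",
        500: "Server had some issue with processing",
    }
    reason = messages.get(response.status_code, "Unexpected status")
    print(f"HTTP {response.status_code}: {reason}")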

Attributes


prompt (str) *

Text prompt describing the desired image style


image (image) *

Input image; the example request passes it as a base64 encoded string (via image_url_to_base64 or image_file_to_base64)


steps (int) *

Number of inference steps

min: 1, max: 50


seed (int, default: 46588)

Seed for random number generation


denoise (float) *

Denoising strength

min: 0, max: 1


scheduler (enum: str) *

Scheduler type

Allowed values:


sampler_name (enum: str) *

Sampler name

Allowed values:


base64 (bool, default: 1)

Output as base64 encoded string

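When base64 is left enabled, the output is returned as a base64 encoded string rather than raw bytes. A minimal sketch of decoding such a response (this assumes a request sent with "base64": True, unlike the example above, and that the body is the base64 string itself; the exact response envelope is not specified here):

import base64

# Assumption: with "base64": True the body is the base64 string of the image
encoded = response.text
with open("decoded_output.jpg", "wb") as f:
    f.write(base64.b64decode(encoded))
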
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
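
For example, a minimal check of that header after the request above (header lookups in requests are case-insensitive):

# Inspect the remaining-credits header returned with each API call
remaining = response.headers.get("x-remaining-credits")
if remaining is not None:
    print(f"Remaining credits: {remaining}")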

Flux.1 Image to Image

The Flux Image-to-Image model by Black Forest Labs is an advanced deep learning tool designed for transforming images based on specific textual prompts. This powerful model leverages a 12-billion-parameter rectified flow transformer, enabling users to create detailed modifications and enhancements to their images.

  • Enhanced Image Quality: Generate stunning visuals at higher resolutions.

  • Advanced Human Anatomy and Photorealism: Achieve highly realistic and anatomically accurate images.

  • Improved Prompt Adherence: Get more accurate and relevant images based on your inputs.

How to Use the Flux Image-to-Image

  1. Upload Image: Start by uploading the image you want to transform. The model supports PNG or JPG formats up to 2048 x 2048 pixels.

  2. Input Text Prompt: Enter a descriptive prompt detailing the transformation or enhancements you want applied to the image.

  3. Adjust Parameters: Fine-tune parameters such as steps, denoise, scheduler, and sampler to achieve the desired output; a sketch combining these steps follows this list.
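
A minimal sketch of these three steps using a local file instead of a URL, reusing image_file_to_base64 and the endpoint from the example above (the file path, prompt, and parameter values are placeholders):

import requests
import base64

def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        return base64.b64encode(f.read()).decode('utf-8')

url = "https://api.segmind.com/v1/flux-img2img"
headers = {'x-api-key': "YOUR_API_KEY"}

data = {
    "image": image_file_to_base64("my_photo.png"),        # Step 1: PNG or JPG up to 2048 x 2048
    "prompt": "watercolor painting, soft pastel colors",  # Step 2: describe the transformation
    "steps": 25,                                          # Step 3: fine-tune parameters
    "denoise": 0.6,
    "seed": 46588,
    "scheduler": "simple",
    "sampler_name": "euler",
    "base64": False
}

response = requests.post(url, json=data, headers=headers)
with open("watercolor_output.jpg", "wb") as f:
    f.write(response.content)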

Use Cases of Flux Image-to-Image

  • Photo Editing: Enhance or completely transform photographs by integrating new elements and styles as described in textual prompts.

  • Artwork Refinement: Modify existing art pieces by adding or altering features based on descriptive inputs, aiding artists in iterative creation processes.

  • Marketing and Advertising: Generate multiple versions of product visuals or advertising content, catering to different themes and client requirements.

  • Creative Prototyping: Assist in the development stages of design projects, where existing images need iterative adjustments to match evolving concepts.