The API can be called from any language that can make HTTP requests; the example below uses Python with the requests library.
import requests
api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/flux-canny-dev"
# Prepare data and files
data = {}
files = {}
data['seed'] = 632558
data['prompt'] = "Mystical fairy in real, magic, 4k picture, high quality"
data['guidance'] = 30
data['megapixels'] = "1"
data['num_outputs'] = 1
# For parameter "control_image", you can send a raw file or a URI:
# files['control_image'] = open('IMAGE_PATH', 'rb') # To send a file
# data['control_image'] = 'IMAGE_URI' # To send a URI
data['output_format'] = "jpg"
data['output_quality'] = 80
data['num_inference_steps'] = 28
data['disable_safety_checker'] = False
headers = {'x-api-key': api_key}
response = requests.post(url, data=data, files=files, headers=headers)
print(response.content) # The response is the generated image
seed: Random seed. Set for reproducible generation.
prompt: Prompt for the generated image.
guidance: Guidance for the generated image. min: 0, max: 100
megapixels: Approximate number of megapixels for the generated image. Use match_input to match the size of the input (with an upper limit of 1440x1440 pixels). Allowed values:
num_outputs: Number of outputs to generate. min: 1, max: 4
control_image: Image used to control the generation. The canny edge detection will be generated automatically.
output_format: Format of the output images. Allowed values:
output_quality: Quality when saving the output images, from 0 to 100; 100 is best quality, 0 is lowest. Not relevant for .png outputs. min: 0, max: 100
num_inference_steps: Number of denoising steps. The recommended range is 28-50; fewer steps produce lower-quality outputs faster. min: 1, max: 50
disable_safety_checker: Disable the safety checker for generated images.
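The min/max bounds above can be checked client-side before sending a request. A hypothetical helper sketch; the `validate_params` name and the ValueError behavior are assumptions, while the ranges come straight from the parameter list:

```python
# Documented ranges for the bounded numeric parameters (from the list above).
PARAM_RANGES = {
    "guidance": (0, 100),
    "num_outputs": (1, 4),
    "output_quality": (0, 100),
    "num_inference_steps": (1, 50),
}

def validate_params(data: dict) -> dict:
    """Raise ValueError if any bounded parameter is out of its range."""
    for name, (lo, hi) in PARAM_RANGES.items():
        if name in data and not lo <= data[name] <= hi:
            raise ValueError(f"{name}={data[name]} outside [{lo}, {hi}]")
    return data
```

Calling this on the `data` dict before `requests.post` fails fast instead of burning an API call on a request the server would reject.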
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits header indicates the number of remaining credits in your account. Monitor this value to avoid any disruptions in your API usage.
Flux Canny Dev is an advanced AI model developed by Black Forest Labs, designed to enhance image generation by integrating edge detection techniques. This model leverages the Canny edge detection algorithm to identify and emphasize the structural outlines within images, resulting in outputs with enhanced clarity and definition.
Edge Detection Integration: Utilizes the Canny algorithm to detect edges, ensuring that generated images maintain sharp and well-defined structures.
Enhanced Image Clarity: By focusing on prominent edges, the model produces images with improved clarity and detail, making them more visually appealing.
Versatile Application: Suitable for various creative projects, including digital art, graphic design, and content creation, where precise image details are crucial.
Improved Visual Quality: The integration of edge detection leads to images with better-defined features, enhancing the overall visual quality.
Consistency in Output: Ensures that generated images consistently exhibit clear and sharp edges, maintaining a high standard across different projects.
Efficiency in Workflow: Streamlines the image generation process by automatically emphasizing structural details, reducing the need for manual adjustments.
Digital Art Creation: Assists artists in producing detailed and sharp images, enhancing the artistic quality of digital artworks.
Graphic Design: Enables designers to create visuals with clear outlines, improving the effectiveness of design elements.
Content Generation: Facilitates the creation of high-quality images for various content needs, including marketing materials and social media posts.
By incorporating the Canny edge detection algorithm, Flux Canny Dev offers a powerful tool for professionals seeking to generate images with enhanced structural clarity and detail.
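To make the edge-detection idea concrete: Canny builds on image gradients. The toy sketch below computes only the Sobel gradient-magnitude step on a grayscale pixel grid; the real Canny algorithm (and whatever preprocessing Flux Canny Dev runs server-side) adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding on top of this.

```python
def sobel_edges(img, threshold=128):
    """Toy edge map: Sobel gradient magnitude + a fixed threshold.

    img is a 2D list of grayscale values (0-255). Border pixels are
    left at 0 since the 3x3 kernels need a full neighborhood.
    """
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                edges[y][x] = 255
    return edges
```

On an image whose left half is black and right half is white, the output lights up only along the vertical boundary, which is exactly the kind of structural outline the model conditions on.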