import requests
import base64

# Convert an image file on the filesystem to a base64 string
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Fetch an image from a URL and convert it to a base64 string
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/ssd-canny"

# Request payload
data = {
    "image": image_url_to_base64("https://segmind-sd-models.s3.amazonaws.com/outputs/ssd_canny_input.png"),  # Or use image_file_to_base64("IMAGE_PATH")
    "prompt": "beautiful lady, scandanavian, natural skin, big smile, black hair, dark makeup, wearing a black top, hyperdetailed photography, sharp focus on face, soft light, dark background, head and shoulders portrait, cover, city bokeh background",
    "negative_prompt": "low quality, ugly, painting",
    "samples": 1,
    "scheduler": "UniPC",
    "num_inference_steps": 30,
    "guidance_scale": 7.5,
    "seed": 760941192,
    "controlnet_scale": 0.5,
    "base64": False
}

headers = {'x-api-key': api_key}
response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response body is the generated image
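When the request sets "base64": True, the response carries the image as a base64 string rather than raw bytes. As a sketch, the inverse of the helpers above could look like this; the function name base64_to_image_file is illustrative, not part of the Segmind API:

```python
import base64

# Illustrative helper: decode a base64 string back into an image file on disk.
def base64_to_image_file(b64_string, image_path):
    with open(image_path, 'wb') as f:
        f.write(base64.b64decode(b64_string))
    return image_path
```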
image : Input image.
prompt : Prompt to render.
negative_prompt : Prompts to exclude, e.g. 'bad anatomy, bad hands, missing fingers'.
samples : Number of samples to generate. min: 1, max: 4
scheduler : Type of scheduler. Allowed values:
num_inference_steps : Number of denoising steps. min: 10, max: 100
guidance_scale : Scale for classifier-free guidance. min: 1, max: 25
seed : Seed for image generation. min: -1, max: 999999999999999
controlnet_scale : Scale for the ControlNet conditioning. min: 0, max: 1
base64 : Base64 encoding of the output image.
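The numeric ranges above can be checked client-side before sending a request. The sketch below is a hypothetical helper, not part of any Segmind SDK, with the documented min/max bounds hard-coded:

```python
# Documented min/max bounds for the numeric parameters (from the table above).
RANGES = {
    "samples": (1, 4),
    "num_inference_steps": (10, 100),
    "guidance_scale": (1, 25),
    "seed": (-1, 999999999999999),
    "controlnet_scale": (0, 1),
}

def validate_payload(data):
    """Return the names of out-of-range parameters (empty list if all valid)."""
    errors = []
    for key, (lo, hi) in RANGES.items():
        if key in data and not (lo <= data[key] <= hi):
            errors.append(key)
    return errors
```

Calling this before requests.post lets you fail fast on a bad payload instead of waiting for an API error.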
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
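As a sketch, the credit header can be read straight off the response's headers; the helper below assumes the header value is a decimal string and returns None when the header is absent:

```python
# Illustrative helper: read the x-remaining-credits header from a response's
# headers mapping and convert it to an integer.
def remaining_credits(headers):
    value = headers.get('x-remaining-credits')
    return int(value) if value is not None else None
```

With the requests library, response.headers is case-insensitive, so remaining_credits(response.headers) works regardless of how the server capitalizes the header name.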
The Segmind Stable Diffusion 1B (SSD-1B) Canny Model empowers users to transform images with an unprecedented level of control over edge detection parameters, allowing for the meticulous accentuation and definition of edges within any image.
The SSD-1B Canny model is built on Canny edge detection, renowned for its precision in highlighting the contours within images. The model is engineered for fine-tuned edge detection, offering users the flexibility to adjust parameters to their exact specifications. Whether aiming for subtle texture enhancements or dramatic edge definitions, the SSD-1B Canny model stands ready to deliver.
Accurate Edge Detection: Harnesses Canny edge detection for precise edge delineation.
Customizable Control: Provides users with extensive control to customize edge detection to their preferences.
Adaptable Use Cases: Versatile across various applications, from artistic endeavors to technical image analysis.
Immediate Results: Delivers real-time manipulation, offering instant feedback and swift results.
Seamless Integration: Crafted for easy incorporation into diverse platforms, enhancing both image editing solutions and computer vision systems.
Enhanced Image Segmentation: Essential for tasks requiring exact edge detection, ensuring sharp and accurate segmentation.
Focused Image Enhancement: Enables users to bring particular edges into the spotlight, improving image clarity and emphasis.
Creative Visual Effects: Provides artists with the capability to craft striking visual effects through edge manipulation.
Advanced Editing Features: Can be integrated into image editing software, granting advanced edge refinement tools to users.