If you want to call the API, choose the client example for your preferred programming language.
import requests
api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/storydiffusion"
# Prepare data and files
data = {}
files = {}
data['seed'] = 42
data['num_ids'] = 3
data['sd_model'] = "Unstable"
data['num_steps'] = 25
# For parameter "ref_image", you can send a raw file or a URI:
# files['ref_image'] = open('IMAGE_PATH', 'rb') # To send a file
# data['ref_image'] = 'IMAGE_URI' # To send a URI
data['image_width'] = 768
data['image_height'] = 768
data['sa32_setting'] = 0.5
data['sa64_setting'] = 0.5
data['output_format'] = "webp"
data['guidance_scale'] = 5
data['output_quality'] = 80
data['negative_prompt'] = "bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs"
data['character_description'] = "a man, wearing black suit"
data['comic_description'] = "at home, read new paper #at home, The newspaper says there is a treasure house in the forest.\non the road, near the forest\n[NC] The car on the road, near the forest #He drives to the forest in search of treasure.\n[NC]A tiger appeared in the forest, at night \nvery frightened, open mouth, in the forest, at night\nrunning very fast, in the forest, at night\n[NC] A house in the forest, at night #Suddenly, he discovers the treasure house!\nin the house filled with treasure, laughing, at night #He is overjoyed inside the house."
data['style_strength_ratio'] = 20
data['style_name'] = "Disney Charactor"
data['comic_style'] = "Classic Comic Style"
headers = {'x-api-key': api_key}
response = requests.post(url, data=data, files=files, headers=headers)
# The response body is the generated image; write it to disk
with open('output.webp', 'wb') as f:
    f.write(response.content)
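The commented-out `ref_image` lines affect other fields as well. Below is a minimal sketch of the payload for the reference-image variant, assuming the same endpoint and parameters as the example above; the image URI and file name are placeholders, and no request is actually sent here:

```python
# Sketch of the payload when a reference image is supplied.
data = {
    "seed": 42,
    "num_ids": 3,
    "sd_model": "Unstable",
    "num_steps": 25,
    "image_width": 768,
    "image_height": 768,
    "output_format": "webp",
    # With ref_image, the class word must be followed by the trigger word 'img':
    "character_description": "a man img, wearing black suit",
    # With ref_image, remove any [NC] flags from the frame prompts:
    "comic_description": "at home, reading a newspaper\non the road, near the forest",
    # Only takes effect when ref_image is provided (range 15-50):
    "style_strength_ratio": 20,
    "style_name": "Disney Charactor",
    "comic_style": "Classic Comic Style",
    "ref_image": "https://example.com/portrait.png",  # placeholder URI
}
files = {}  # or: files["ref_image"] = open("portrait.png", "rb") to upload a file
# response = requests.post(url, data=data, files=files,
#                          headers={"x-api-key": api_key})
```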
seed: Random seed. Leave blank to randomize the seed.
num_ids: Number of id images among the total images. This should not exceed the total number of line-separated prompts.
sd_model: Allowed values:
num_steps: Number of sample steps. min: 20, max: 50
ref_image: Reference image for the character.
image_width: Allowed values:
image_height: Allowed values:
sa32_setting: The degree of Paired Attention at the 32 x 32 self-attention layers. min: 0, max: 1
sa64_setting: The degree of Paired Attention at the 64 x 64 self-attention layers. min: 0, max: 1
output_format: Allowed values:
guidance_scale: Scale for classifier-free guidance. min: 0.1, max: 10
output_quality: Quality of the output images, from 0 to 100 (100 is best quality, 0 is lowest). min: 0, max: 100
negative_prompt: Describe things you do not want to see in the output.
character_description: General description of the character. Add the trigger word 'img' when using ref_image: if ref_image is provided, make sure the class word you want to customize is followed by the trigger word 'img', such as 'man img', 'woman img', or 'girl img'.
comic_description: Each frame is divided by a new line; only the first 10 prompts are valid for demo speed. Remove [NC] when using ref_image. When NOT using ref_image: (1) Typesetting style and captioning are supported. By default, the prompt is used as the caption for each image; to change the caption, add a '#' to a line, and only the part after the '#' is added as a caption to that image. (2) The [NC] symbol is a flag indicating that no characters should be present in the generated scene image; to use it, prepend '[NC]' at the beginning of the line.
style_strength_ratio: Style strength of the reference image (%), only used if ref_image is provided. min: 15, max: 50
style_name: Allowed values:
comic_style: Allowed values:
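The caption and [NC] rules for comic_description can be illustrated with a small helper. This function is hypothetical (not part of the API); it only assembles a prompt string in the documented format from structured frame data:

```python
# Hypothetical helper: build a comic_description string.
# One frame per line; an optional caption goes after '#'; the '[NC]' prefix
# marks scenes that should contain no characters (non-ref_image mode only).
def build_comic_description(frames):
    lines = []
    for frame in frames:
        prompt = frame["prompt"]
        if frame.get("no_character"):      # [NC] flag: no character in this scene
            prompt = "[NC] " + prompt
        if "caption" in frame:             # text after '#' becomes the caption
            prompt += " #" + frame["caption"]
        lines.append(prompt)
    return "\n".join(lines)

frames = [
    {"prompt": "at home, reading a newspaper",
     "caption": "The newspaper says there is a treasure house in the forest."},
    {"prompt": "The car on the road, near the forest", "no_character": True,
     "caption": "He drives to the forest in search of treasure."},
    {"prompt": "very frightened, open mouth, in the forest, at night"},
]
print(build_comic_description(frames))
```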
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
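A small helper can read that header from each response. The `x-remaining-credits` header name is documented above; parsing its value as an integer is an assumption about the format:

```python
# Hypothetical helper: extract remaining credits from a response's headers.
def remaining_credits(headers):
    value = headers.get("x-remaining-credits")
    return int(value) if value is not None else None

# Typical use after a call, e.g.:
#   response = requests.post(url, data=data, files=files, headers=headers)
#   credits = remaining_credits(response.headers)
print(remaining_credits({"x-remaining-credits": "120"}))  # → 120
print(remaining_credits({}))                              # → None
```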
Story Diffusion leverages the power of diffusion models to generate a series of images that cohesively depict your story's scenes. It's like having a visual effects team at your fingertips, translating your words into a captivating visual experience. Story Diffusion goes beyond simply generating individual images. Its core strength lies in maintaining consistency across the entire sequence. Characters, settings, and overall mood remain thematically linked, ensuring a visually cohesive story arc.
This innovative model has far-reaching implications for various creative fields:
Concept Art and Illustration: Story Diffusion empowers artists and designers by generating visual references that perfectly capture the essence of their ideas. It acts as a springboard for further creative exploration.
Storyboarding and Pre-visualization: Filmmakers and animators can use Story Diffusion to create dynamic storyboards that visualize key scenes and plot points. This streamlines the pre-production process, saving time and resources.
Graphic Novels and Comics: Breathe life into static panels with Story Diffusion. Generate visuals that showcase dynamic action sequences and character emotions, enhancing the reading experience.
Interactive Storytelling: Integrate Story Diffusion into interactive storytelling platforms. Users can shape the narrative, and the model generates corresponding visuals on the fly, creating a truly personalized and engaging experience.
Story Diffusion turns your written narratives into stunning image sequences.
Fooocus enables high-quality image generation effortlessly, combining the best of Stable Diffusion and Midjourney.
Take a picture/gif and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training