Qwen-Image-Edit-Plus supports multi-image input and output. It can accurately modify text in an image, add, delete, or move objects, change the action of a subject, transfer an image style, and enhance image details.
Examples
Multi-image fusion
Input image 1 | Input image 2 | Input image 3 | Prompt: Make the girl from Image 1 wear the black dress from Image 2 and sit in the pose from Image 3.
Input image 1 | Input image 2 | Input image 3 | Prompt: Make the girl from Image 1 wear the necklace from Image 2 and carry the bag from Image 3 on her left shoulder.
Single-image editing
Original image | Prompt: Generate an image that matches the depth map, following this description: A dilapidated red bicycle is parked on a muddy path with a dense primeval forest in the background.
Original image | Prompt: Replace the words "HEALTH INSURANCE" on the letter blocks with "明天会更好".
Original image | Prompt: Replace the dotted shirt with a light blue shirt.
Original image | Prompt: Change the background of the image to Antarctica.
HTTP
Before making a call, obtain an API key and set it as an environment variable. To make calls using an SDK, install the DashScope SDK, which is available for Python and Java.
The Beijing and Singapore regions have separate API keys and request endpoints. Do not use them interchangeably. Cross-region calls cause authentication failures or service errors.
Singapore region: POST https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation
Beijing region: POST https://dashscope.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation
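Because the two regions must not be mixed, it can help to keep the endpoint choice explicit in code. The following is a minimal sketch; the region keys are illustrative names, not API values:

```python
# Map each region to its DashScope base endpoint; the keys are illustrative.
BASE_URLS = {
    "singapore": "https://dashscope-intl.aliyuncs.com/api/v1",
    "beijing": "https://dashscope.aliyuncs.com/api/v1",
}

def generation_endpoint(region):
    """Return the full multimodal-generation URL for the given region."""
    return BASE_URLS[region] + "/services/aigc/multimodal-generation/generation"

print(generation_endpoint("singapore"))
```

Use the endpoint that matches the region where your API key was issued.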
Request parameters
The URLs in the single-image editing and multi-image fusion examples are for the Singapore region. If you use a model in the Beijing region, replace the URL with the Beijing endpoint above.
Headers
Content-Type: The content type of the request. Set this parameter to application/json.
Authorization: The identity authentication credential for the request. This API uses a Model Studio API key for authentication. Example: Bearer sk-xxxx.
Request body
model: The model to use. Available models include qwen-image-edit-plus, qwen-image-edit-plus-2025-10-30, and qwen-image-edit.
input: The input object, which contains the messages array with the input images and the text prompt.
parameters: Additional parameters that control image generation, such as n, size, watermark, negative_prompt, and prompt_extend.
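Putting the headers and body fields together, a raw HTTP request can be assembled as follows. This is a sketch: the API key, image URL, and prompt are placeholders, and only n and watermark are set in parameters here.

```python
import json

# Singapore-region endpoint from the HTTP section above.
ENDPOINT = "https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation"

def build_request(api_key, image_urls, prompt, n=1):
    """Assemble the headers and JSON body for a qwen-image-edit-plus request."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    # content holds one to three image items followed by the text prompt.
    content = [{"image": url} for url in image_urls] + [{"text": prompt}]
    payload = {
        "model": "qwen-image-edit-plus",
        "input": {"messages": [{"role": "user", "content": content}]},
        "parameters": {"n": n, "watermark": False},
    }
    return headers, payload

headers, payload = build_request(
    "sk-xxxx",
    ["https://example.com/input1.png"],
    "Change the background of the image to Antarctica.",
    n=2,
)
print(json.dumps(payload, indent=2))
# Send with, for example: requests.post(ENDPOINT, headers=headers, json=payload)
```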
Response parameters
Successful task execution: Task data, such as the task status and image URLs, is retained for only 24 hours and is automatically purged after this period. Save the generated images promptly.
Abnormal task execution: If a task fails, the response returns relevant information. You can identify the cause of the failure from the code and message fields. For more information about how to resolve errors, see Error codes.
output: The results generated by the model.
usage: The resource usage for this request. This parameter is returned only when the request is successful.
request_id: The unique request ID. You can use this ID to trace and troubleshoot issues.
code: The error code for a failed request. This parameter is not returned if the request is successful. For more information, see Error messages.
message: The detailed information about a failed request. This parameter is not returned if the request is successful. For more information, see Error messages.
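As a sketch of how these fields fit together, the following helper pulls the generated image URLs out of a successful response body. The sample dict is abbreviated from the response example on this page; the URLs are placeholders.

```python
def extract_image_urls(response):
    """Return image URLs from output.choices[0].message.content."""
    content = response["output"]["choices"][0]["message"]["content"]
    return [item["image"] for item in content if "image" in item]

# Abbreviated successful response, following the schema described above.
sample = {
    "request_id": "121d8c7c-xxxx",
    "output": {"choices": [{"finish_reason": "stop", "message": {
        "role": "assistant",
        "content": [
            {"image": "https://example.com/a.png"},
            {"image": "https://example.com/b.png"},
        ]}}]},
    "usage": {"image_count": 2, "width": 832, "height": 1248},
}
print(extract_image_urls(sample))
```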
DashScope SDK
The SDK parameter names are mostly consistent with the HTTP API. The parameter structure is adapted for each programming language. For a complete list of parameters, see the Qwen API reference.
Python SDK
Install the latest version of the DashScope Python SDK; older versions may cause runtime errors. For instructions, see Install or upgrade the SDK.
Asynchronous APIs are not supported.
Request examples
This example uses the qwen-image-edit-plus model to generate two images.
Pass an image using a public URL
import json
import os
import dashscope
from dashscope import MultiModalConversation
# The following is the URL for the Singapore region. If you use a model in the Beijing region, replace the URL with: https://dashscope.aliyuncs.com/api/v1
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
# The model supports one to three input images.
messages = [
{
"role": "user",
"content": [
{"image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/thtclx/input1.png"},
{"image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/iclsnx/input2.png"},
{"image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/gborgw/input3.png"},
{"text": "Make the girl from Image 1 wear the black dress from Image 2 and sit in the pose from Image 3."}
]
}
]
# The API Keys for the Singapore and Beijing regions are different. Get an API Key: https://www.alibabacloud.com/help/en/model-studio/get-api-key
# If you have not configured the environment variable, replace the following line with your Model Studio API Key: api_key="sk-xxx"
api_key = os.getenv("DASHSCOPE_API_KEY")
# The model supports only single-turn conversations and reuses the multi-turn conversation API.
# qwen-image-edit-plus supports outputting 1 to 6 images. This example shows how to output 2 images.
response = MultiModalConversation.call(
api_key=api_key,
model="qwen-image-edit-plus",
messages=messages,
stream=False,
n=2,
watermark=False,
negative_prompt=" ",
prompt_extend=True,
# The size parameter is supported only when the number of output images n is 1. Otherwise, an error is reported.
# size="1024*2048",
)
if response.status_code == 200:
# To view the full response, uncomment the following line.
# print(json.dumps(response, ensure_ascii=False))
for i, content in enumerate(response.output.choices[0].message.content):
print(f"URL of output image {i+1}: {content['image']}")
else:
print(f"HTTP status code: {response.status_code}")
print(f"Error code: {response.code}")
print(f"Error message: {response.message}")
print("For more information, see the documentation: https://www.alibabacloud.com/help/en/model-studio/error-code")
Pass an image using Base64 encoding
import json
import os
import dashscope
from dashscope import MultiModalConversation
import base64
import mimetypes
# The following is the URL for the Singapore region. If you use a model in the Beijing region, replace the URL with: https://dashscope.aliyuncs.com/api/v1
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
# --- For Base64 encoding ---
# Format: data:{mime_type};base64,{base64_data}
def encode_file(file_path):
mime_type, _ = mimetypes.guess_type(file_path)
if not mime_type or not mime_type.startswith("image/"):
raise ValueError("Unsupported or unrecognized image format")
try:
with open(file_path, "rb") as image_file:
encoded_string = base64.b64encode(
image_file.read()).decode('utf-8')
return f"data:{mime_type};base64,{encoded_string}"
except IOError as e:
raise IOError(f"Error reading file: {file_path}, Error: {str(e)}")
# Get the Base64 encoding of the image.
# Call the encoding function. Replace "/path/to/your/image.png" with the path to your local image file. Otherwise, the code will not run.
image = encode_file("/path/to/your/image.png")
messages = [
{
"role": "user",
"content": [
{"image": image},
{"text": "Generate an image that matches the depth map, following this description: A dilapidated red bicycle is parked on a muddy path with a dense primeval forest in the background."}
]
}
]
# The API Keys for the Singapore and Beijing regions are different. Get an API Key: https://www.alibabacloud.com/help/en/model-studio/get-api-key
# If you have not configured the environment variable, replace the following line with your Model Studio API Key: api_key="sk-xxx"
api_key = os.getenv("DASHSCOPE_API_KEY")
# The model supports only single-turn conversations and reuses the multi-turn conversation API.
# qwen-image-edit-plus supports outputting 1 to 6 images. This example shows how to output 2 images.
response = MultiModalConversation.call(
api_key=api_key,
model="qwen-image-edit-plus",
messages=messages,
stream=False,
n=2,
watermark=False,
negative_prompt=" ",
prompt_extend=True,
# The size parameter is supported only when the number of output images n is 1. Otherwise, an error is reported.
# size="2048*1024",
)
if response.status_code == 200:
# To view the full response, uncomment the following line.
# print(json.dumps(response, ensure_ascii=False))
for i, content in enumerate(response.output.choices[0].message.content):
print(f"URL of output image {i+1}: {content['image']}")
else:
print(f"HTTP status code: {response.status_code}")
print(f"Error code: {response.code}")
print(f"Error message: {response.message}")
print("For more information, see the documentation: https://www.alibabacloud.com/help/en/model-studio/error-code")
Download an image from a URL
# You need to install requests to download the image: pip install requests
import requests
def download_image(image_url, save_path='output.png'):
try:
response = requests.get(image_url, stream=True, timeout=300) # Set timeout
response.raise_for_status() # Raise an exception if the HTTP status code is not 200.
with open(save_path, 'wb') as f:
for chunk in response.iter_content(chunk_size=8192):
f.write(chunk)
print(f"Image downloaded successfully to: {save_path}")
except requests.exceptions.RequestException as e:
print(f"Image download failed: {e}")
image_url = "https://dashscope-result-sz.oss-cn-shenzhen.aliyuncs.com/xxx.png?Expires=xxx"
download_image(image_url, save_path='output.png')
Response example
The image link is valid for 24 hours. Download the image promptly.
input_tokens and output_tokens are compatibility fields. Their values are currently fixed at 0.
{
"status_code": 200,
"request_id": "121d8c7c-360b-4d22-a976-6dbb8bxxxxxx",
"code": "",
"message": "",
"output": {
"text": null,
"finish_reason": null,
"choices": [
{
"finish_reason": "stop",
"message": {
"role": "assistant",
"content": [
{
"image": "https://dashscope-result-sz.oss-cn-shenzhen.aliyuncs.com/xxx.png?Expires=xxx"
},
{
"image": "https://dashscope-result-sz.oss-cn-shenzhen.aliyuncs.com/xxx.png?Expires=xxx"
}
]
}
}
]
},
"usage": {
"input_tokens": 0,
"output_tokens": 0,
"height": 1248,
"image_count": 2,
"width": 832
}
}
Java SDK
Install the latest version of the DashScope Java SDK; older versions may cause runtime errors. For instructions, see Install or upgrade the SDK.
Request examples
The following example shows how to use the qwen-image-edit-plus model to generate two images.
Pass an image using a public URL
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversation;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationParam;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationResult;
import com.alibaba.dashscope.common.MultiModalMessage;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.exception.UploadFileException;
import com.alibaba.dashscope.utils.JsonUtils;
import com.alibaba.dashscope.utils.Constants;
import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.List;
public class QwenImageEdit {
static {
// The following URL is for the Singapore region. If you use a model in the Beijing region, replace the URL with https://dashscope.aliyuncs.com/api/v1.
Constants.baseHttpApiUrl = "https://dashscope-intl.aliyuncs.com/api/v1";
}
// The API keys for the Singapore and Beijing regions are different. To obtain an API key, visit https://www.alibabacloud.com/help/en/model-studio/get-api-key.
// If you have not configured the environment variable, replace the following line with your Model Studio API key: apiKey="sk-xxx"
static String apiKey = System.getenv("DASHSCOPE_API_KEY");
public static void call() throws ApiException, NoApiKeyException, UploadFileException, IOException {
MultiModalConversation conv = new MultiModalConversation();
// The model supports one to three input images.
MultiModalMessage userMessage = MultiModalMessage.builder().role(Role.USER.getValue())
.content(Arrays.asList(
Collections.singletonMap("image", "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/thtclx/input1.png"),
Collections.singletonMap("image", "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/iclsnx/input2.png"),
Collections.singletonMap("image", "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/gborgw/input3.png"),
Collections.singletonMap("text", "The girl in image 1 is wearing the black dress from image 2 and sitting in the pose from image 3.")
)).build();
// qwen-image-edit-plus supports outputting 1 to 6 images. This example shows how to output 2 images.
Map<String, Object> parameters = new HashMap<>();
parameters.put("watermark", false);
parameters.put("negative_prompt", " ");
parameters.put("n", 2);
parameters.put("prompt_extend", true);
// The size parameter is supported only when the number of output images n is 1. Otherwise, an error is reported.
// parameters.put("size", "1024*2048");
MultiModalConversationParam param = MultiModalConversationParam.builder()
.apiKey(apiKey)
.model("qwen-image-edit-plus")
.messages(Collections.singletonList(userMessage))
.parameters(parameters)
.build();
MultiModalConversationResult result = conv.call(param);
// To view the complete response, uncomment the following line.
// System.out.println(JsonUtils.toJson(result));
List<Map<String, Object>> contentList = result.getOutput().getChoices().get(0).getMessage().getContent();
int imageIndex = 1;
for (Map<String, Object> content : contentList) {
if (content.containsKey("image")) {
System.out.println("URL of output image " + imageIndex + ": " + content.get("image"));
imageIndex++;
}
}
}
public static void main(String[] args) {
try {
call();
} catch (ApiException | NoApiKeyException | UploadFileException | IOException e) {
System.out.println(e.getMessage());
}
}
}
Pass an image using Base64 encoding
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversation;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationParam;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationResult;
import com.alibaba.dashscope.common.MultiModalMessage;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.exception.UploadFileException;
import com.alibaba.dashscope.utils.JsonUtils;
import com.alibaba.dashscope.utils.Constants;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.Base64;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.List;
public class QwenImageEdit {
static {
// The following URL is for the Singapore region. If you use a model in the Beijing region, replace the URL with https://dashscope.aliyuncs.com/api/v1.
Constants.baseHttpApiUrl = "https://dashscope-intl.aliyuncs.com/api/v1";
}
// The API keys for the Singapore and Beijing regions are different. To obtain an API key, visit https://www.alibabacloud.com/help/en/model-studio/get-api-key.
// If you have not configured the environment variable, replace the following line with your Model Studio API key: apiKey="sk-xxx"
static String apiKey = System.getenv("DASHSCOPE_API_KEY");
public static void call() throws ApiException, NoApiKeyException, UploadFileException, IOException {
// Replace "/path/to/your/image.png" with the path to your local image file. Otherwise, the code will not run.
String image = encodeFile("/path/to/your/image.png");
MultiModalConversation conv = new MultiModalConversation();
MultiModalMessage userMessage = MultiModalMessage.builder().role(Role.USER.getValue())
.content(Arrays.asList(
Collections.singletonMap("image", image),
Collections.singletonMap("text", "Generate an image that matches the depth map and follows this description: A dilapidated red bicycle is parked on a muddy path, with a dense primeval forest in the background.")
)).build();
// qwen-image-edit-plus supports outputting 1 to 6 images. This example shows how to output 2 images.
Map<String, Object> parameters = new HashMap<>();
parameters.put("watermark", false);
parameters.put("negative_prompt", " ");
parameters.put("n", 2);
parameters.put("prompt_extend", true);
// The size parameter is supported only when the number of output images n is 1. Otherwise, an error is reported.
// parameters.put("size", "2048*1024");
MultiModalConversationParam param = MultiModalConversationParam.builder()
.apiKey(apiKey)
.model("qwen-image-edit-plus")
.messages(Collections.singletonList(userMessage))
.parameters(parameters)
.build();
MultiModalConversationResult result = conv.call(param);
// To view the complete response, uncomment the following line.
// System.out.println(JsonUtils.toJson(result));
List<Map<String, Object>> contentList = result.getOutput().getChoices().get(0).getMessage().getContent();
int imageIndex = 1;
for (Map<String, Object> content : contentList) {
if (content.containsKey("image")) {
System.out.println("URL of output image " + imageIndex + ": " + content.get("image"));
imageIndex++;
}
}
}
/**
* Encodes a file into a Base64 string.
* @param filePath The file path.
* @return A Base64 string in the format: data:{mime_type};base64,{base64_data}
*/
public static String encodeFile(String filePath) {
Path path = Paths.get(filePath);
if (!Files.exists(path)) {
throw new IllegalArgumentException("File does not exist: " + filePath);
}
// Detect the MIME type.
String mimeType = null;
try {
mimeType = Files.probeContentType(path);
} catch (IOException e) {
throw new IllegalArgumentException("Cannot detect the file type: " + filePath);
}
if (mimeType == null || !mimeType.startsWith("image/")) {
throw new IllegalArgumentException("Unsupported or unrecognized image format.");
}
// Read the file content and encode it.
byte[] fileBytes = null;
try{
fileBytes = Files.readAllBytes(path);
} catch (IOException e) {
throw new IllegalArgumentException("Cannot read the file content: " + filePath);
}
String encodedString = Base64.getEncoder().encodeToString(fileBytes);
return "data:" + mimeType + ";base64," + encodedString;
}
public static void main(String[] args) {
try {
call();
} catch (ApiException | NoApiKeyException | UploadFileException | IOException e) {
System.out.println(e.getMessage());
}
}
}
Download an image from a URL
import java.io.FileOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
public class ImageDownloader {
public static void downloadImage(String imageUrl, String savePath) {
try {
URL url = new URL(imageUrl);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setConnectTimeout(5000);
connection.setReadTimeout(300000);
connection.setRequestMethod("GET");
InputStream inputStream = connection.getInputStream();
FileOutputStream outputStream = new FileOutputStream(savePath);
byte[] buffer = new byte[8192];
int bytesRead;
while ((bytesRead = inputStream.read(buffer)) != -1) {
outputStream.write(buffer, 0, bytesRead);
}
inputStream.close();
outputStream.close();
System.out.println("Image downloaded successfully to: " + savePath);
} catch (Exception e) {
System.err.println("Image download failed: " + e.getMessage());
}
}
public static void main(String[] args) {
String imageUrl = "http://dashscope-result-bj.oss-cn-beijing.aliyuncs.com/xxx?Expires=xxx";
String savePath = "output.png";
downloadImage(imageUrl, savePath);
}
}
Response example
The image link is valid for 24 hours. Download the image promptly.
{
"requestId": "46281da9-9e02-941c-ac78-be88b8xxxxxx",
"usage": {
"image_count": 2,
"width": 1216,
"height": 864
},
"output": {
"choices": [
{
"finish_reason": "stop",
"message": {
"role": "assistant",
"content": [
{
"image": "https://dashscope-result-sz.oss-cn-shenzhen.aliyuncs.com/xxx.png?Expires=xxx"
},
{
"image": "https://dashscope-result-sz.oss-cn-shenzhen.aliyuncs.com/xxx.png?Expires=xxx"
}
]
}
}
]
}
}
Billing and rate limiting
Singapore region
Model | Unit price | Requests per second (RPS) limit | Number of concurrent tasks | Free quota
qwen-image-edit-plus (currently the same capabilities as qwen-image-edit-plus-2025-10-30) | $0.03/image | 2 | No limit for sync APIs | 100 images
qwen-image-edit-plus-2025-10-30 | $0.03/image | 2 | No limit for sync APIs | 100 images
qwen-image-edit | $0.045/image | 2 | No limit for sync APIs | 100 images
Rate limits are shared by an Alibaba Cloud account and its RAM users.
Beijing region
Model | Unit price | Requests per second (RPS) limit | Number of concurrent tasks | Free quota
qwen-image-edit-plus (currently the same capabilities as qwen-image-edit-plus-2025-10-30) | $0.028671/image | 2 | No limit for sync APIs | No free quota
qwen-image-edit-plus-2025-10-30 | $0.028671/image | 2 | No limit for sync APIs | No free quota
qwen-image-edit | $0.043/image | 2 | No limit for sync APIs | No free quota
Rate limits are shared by an Alibaba Cloud account and its RAM users.
Billing description:
You are charged based on the number of images that are successfully generated. If a single request returns n images, the charge for that request is n * the unit price.
Failed model calls or processing errors do not incur fees or consume the free quota.
You can enable the "Free quota only" feature to avoid additional charges after your free quota is exhausted. For more information, see Free quota for new users.
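As a worked example of the per-request charge, the following sketch computes n * unit price using the Singapore prices from the table above (it ignores the free quota):

```python
# Charge = number of successfully generated images * unit price.
# Prices copied from the Singapore pricing table; USD per image.
UNIT_PRICE_USD = {
    "qwen-image-edit-plus": 0.03,
    "qwen-image-edit": 0.045,
}

def request_cost(model, n_images):
    """Cost in USD for one request that successfully returned n_images images."""
    return n_images * UNIT_PRICE_USD[model]

print(request_cost("qwen-image-edit-plus", 2))  # 0.06
```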
Configure image access permission
Images generated by the model are stored in Object Storage Service (OSS). Each image is assigned a publicly accessible OSS link, such as https://dashscope-result-xx.oss-cn-xxxx.aliyuncs.com/xxx.png. You can use this link to view or download the image. The link is valid for only 24 hours.
If your business has high security requirements and you cannot access public OSS links, you can configure an access whitelist. Add the following domain names to your whitelist to ensure that you can access the image links.
dashscope-result-bj.oss-cn-beijing.aliyuncs.com
dashscope-result-hz.oss-cn-hangzhou.aliyuncs.com
dashscope-result-sh.oss-cn-shanghai.aliyuncs.com
dashscope-result-wlcb.oss-cn-wulanchabu.aliyuncs.com
dashscope-result-zjk.oss-cn-zhangjiakou.aliyuncs.com
dashscope-result-sz.oss-cn-shenzhen.aliyuncs.com
dashscope-result-hy.oss-cn-heyuan.aliyuncs.com
dashscope-result-cd.oss-cn-chengdu.aliyuncs.com
dashscope-result-gz.oss-cn-guangzhou.aliyuncs.com
dashscope-result-wlcb-acdr-1.oss-cn-wulanchabu-acdr-1.aliyuncs.com
Error codes
If a call fails, see Error messages for troubleshooting.
FAQ
Q: Does qwen-image-edit support multi-turn conversational editing?
A: No, it does not. The qwen-image-edit model is designed for single-turn execution. Each call is an independent, stateless editing task, and the model does not store your editing history. To perform continuous edits, you can use the output image from a previous edit as the input image for a new request.
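For example, continuous editing can be approximated by chaining single-turn requests, feeding each output URL back in as the next input. This is a sketch; all URLs are placeholders:

```python
# Each round is an independent single-turn request. The output image URL from
# the previous round becomes the input image of the next round.
def build_messages(image_url, instruction):
    return [{"role": "user",
             "content": [{"image": image_url}, {"text": instruction}]}]

round1 = build_messages("https://example.com/original.png",
                        "Replace the dotted shirt with a light blue shirt.")
# Suppose the first call returned this output URL:
first_output_url = "https://example.com/edited-round1.png"
round2 = build_messages(first_output_url,
                        "Change the background of the image to Antarctica.")
print(round2[0]["content"][0]["image"])
```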
Q: What languages do qwen-image and qwen-image-plus support?
A: They officially support Simplified Chinese and English. You can try other languages, but their performance has not been fully verified and is not guaranteed.
Q: If I upload multiple reference images with different aspect ratios, which one determines the aspect ratio of the output image?
A: The output image will match the aspect ratio of the last uploaded reference image.
Q: How do I view model usage?
A: Model call data is available after a one-hour delay. You can go to the Model Observation (Singapore or Beijing) page to view metrics such as call usage, number of calls, and success rate. For more information, see How to view model call records.