How can I generate consistent characters across multiple images using the OpenAI Python API for DALL-E 3? I've managed to do this by prompting the model directly on the OpenAI website (e.g., by referencing a seed number), but I'm not sure how to replicate the same behavior through the API. Any guidance or examples would be appreciated.
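
For reference, this is roughly how I'm generating each image at the moment — a minimal sketch using the `openai` Python package, with a placeholder prompt. I don't see an obvious way to pass a seed or otherwise tie a new request to a previous generation:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Generate a single image from a text prompt (placeholder description).
response = client.images.generate(
    model="dall-e-3",
    prompt="A red-haired adventurer in a green cloak standing on a cliff at sunset",
    size="1024x1024",
    quality="standard",
    n=1,
)

# The response contains a URL to the generated image and the revised prompt
# that DALL-E 3 actually used internally.
print(response.data[0].url)
print(response.data[0].revised_prompt)
```

Each call like this seems to produce a different-looking character, even with an identical prompt, so I'd like to know what the recommended approach is for keeping the character consistent across calls.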