App for Creating DALL-E Prompts with ChatGPT ②


Introduction

This article is day 13 of the #AWSAmplifyJP Advent Calendar 2023. As an entry in the Kuso App Advent Calendar 2023, I built an app that has ChatGPT generate prompts for DALL-E (image generation). The app's features are introduced in the article below.

https://qiita.com/tanosugi/private/927add61c89565cd5448


Technologies Used

In this section, I introduce the technologies used.

Amplify

I used many Amplify features, so I’ll introduce them along with code.

Figma to Code

First, I designed the landing page and web app components in Figma and generated React code with the Figma to Code feature. Depending on the type of app, this saves anywhere from a small fraction to half of the coding compared to writing everything from scratch. Tools that have AI write code, like Vercel's v0, have appeared recently, but personally I prefer turning something drawn in a GUI into code over instructing an AI with prompts.


Example of automatically generated React code

```jsx
/***************************************************************************
 * The contents of this file were generated with Amplify Studio.           *
 * Please refrain from making any modifications to this file.              *
 * Any changes to this file will be overwritten when running amplify pull. *
 ***************************************************************************/
/* eslint-disable */
import * as React from "react";
import { getOverrideProps } from "./utils";
import { Button, Flex, Image, Text } from "@aws-amplify/ui-react";

export default function HeroView(props) {
  const { overrides, ...rest } = props;
  return (
    <Flex gap="0" direction="row" width="1490px" height="unset" justifyContent="center" alignItems="center" position="relative" padding="0px 0px 0px 0px" {...getOverrideProps(overrides, "HeroView")} {...rest}>
      <Flex gap="10px" direction="column" width="720px" height="392px" justifyContent="space-between" alignItems="center" shrink="0" position="relative" padding="0px 0px 0px 0px" {...getOverrideProps(overrides, "Frame 3")}>
        <Flex gap="24px" direction="column" width="316px" height="unset" justifyContent="center" alignItems="center" grow="1" shrink="1" basis="0" position="relative" padding="0px 0px 0px 0px" {...getOverrideProps(overrides, "HeroMessage")}>
          <Flex gap="16px" direction="column" width="unset" height="unset" justifyContent="center" alignItems="center" shrink="0" alignSelf="stretch" position="relative" padding="0px 0px 0px 0px" {...getOverrideProps(overrides, "Message")}>
            <Text fontFamily="Inter" fontSize="16px" fontWeight="700" color="rgba(64,170,191,1)" lineHeight="24px" textAlign="center" display="block" direction="column" justifyContent="unset" width="unset" height="unset" gap="unset" alignItems="unset" shrink="0" alignSelf="stretch" position="relative" padding="0px 0px 0px 0px" whiteSpace="pre-wrap" children="Easy Image" {...getOverrideProps(overrides, "Eyebrow")}></Text>
            <Text fontFamily="Inter" fontSize="24px" fontWeight="600" color="rgba(13,26,38,1)" lineHeight="30px" textAlign="center" display="block" direction="column" justifyContent="unset" width="unset" height="unset" gap="unset" alignItems="unset" shrink="0" alignSelf="stretch" position="relative" padding="0px 0px 0px 0px" whiteSpace="pre-wrap" children="画像を簡単に生成" {...getOverrideProps(overrides, "Heading")}></Text>
            <Text fontFamily="Inter" fontSize="16px" fontWeight="400" color="rgba(48,64,80,1)" lineHeight="24px" textAlign="center" display="block" direction="column" justifyContent="unset" letterSpacing="0.01px" width="unset" height="unset" gap="unset" alignItems="unset" shrink="0" alignSelf="stretch" position="relative" padding="0px 0px 0px 0px" whiteSpace="pre-wrap" children="生成したい画像について簡単に伝えるだけで、自動で画像生成用のプロンプトを作り、実行できます。" {...getOverrideProps(overrides, "Body")}></Text>
          </Flex>
          <Button width="unset" height="unset" shrink="0" size="large" isDisabled={false} variation="primary" children="試してみる" {...getOverrideProps(overrides, "Button")}></Button>
        </Flex>
      </Flex>
      <Image width="720px" height="392px" display="block" gap="unset" alignItems="unset" justifyContent="unset" shrink="0" position="relative" padding="0px 0px 0px 0px" objectFit="cover" {...getOverrideProps(overrides, "1700652527621-cat 1")}></Image>
    </Flex>
  );
}
```

Amplify Form Builder, react-modal

I built the input forms with Form Builder, and used react-modal so that submitting one form opens the next modal.


Code

```tsx
"use client";
import Modal from "react-modal";
import { CreatedImage } from "@/models";
import {
  createFromDalleAndSaveToS3,
  getSignedS3UrlFromKey,
} from "@/utils/createSaveImage";
import { Button, Flex, Loader } from "@aws-amplify/ui-react";
import { DataStore } from "aws-amplify/datastore";
import React, { ChangeEvent, useState } from "react";
import PromptCreateForm from "@/ui-components/PromptCreateForm";
import PromptEditForm from "@/ui-components/PromptEditForm";
import createPrompt from "@/utils/chat";
import ImageCreateSuccessView from "@/ui-components/ImageCreateSuccessView";

export default function CreateSaveImageModal({
  openApiKey,
}: {
  openApiKey: string;
}) {
  const [modalToOpen, setModalToOpen] = useState("");
  const [generationTarget, setGenerationTarget] = useState("");
  const [adjective, setAdjective] = useState("");
  const [languageOfPrompt, setLanguageOfPrompt] = useState("");
  const [numberOfWordsPrompt, setNumberOfWordsPrompt] = useState("");
  const [prompt, setPrompt] = useState("");
  const [imageUrl, setImageUrl] = useState("");
  const [loadingDalle, setLoadingDalle] = useState(false);

  const createImage = async (prompt: string) => {
    setLoadingDalle(true);
    const key = await createFromDalleAndSaveToS3(openApiKey, prompt);
    const url = await getSignedS3UrlFromKey(key);
    setImageUrl(url);
    DataStore.save(new CreatedImage({ title: prompt, s3Url: key }));
    setLoadingDalle(false);
  };

  const onSubmitPromptCreateForm = async () => {
    setModalToOpen("CreatingPrompt");
    const resp = await createPrompt({
      openApiKey: openApiKey,
      generationTarget: generationTarget,
      adjective: adjective,
      languageOfPrompt: languageOfPrompt,
      numberOfWordsPrompt: numberOfWordsPrompt,
    });
    setPrompt(resp);
    setModalToOpen("PromptEditForm");
  };

  const onSubmitPromptEditForm = async () => {
    setModalToOpen("CreatingImage");
    await createImage(prompt);
    setModalToOpen("ImageCreateSuccessView");
  };

  return (
    <>
      {loadingDalle ? (
        "Creating Image..."
      ) : (
        <Button variation="primary" onClick={() => setModalToOpen("PromptCreateForm")}>
          プロンプトを作って画像を生成する
        </Button>
      )}
      <Modal isOpen={modalToOpen == "PromptCreateForm"}>
        <PromptCreateForm
          onSubmit={onSubmitPromptCreateForm}
          overrides={{
            generationTarget: {
              value: generationTarget,
              onChange: (event: ChangeEvent<HTMLInputElement>) =>
                setGenerationTarget(event.target.value),
            },
            adjective: {
              value: adjective,
              onChange: (event: ChangeEvent<HTMLInputElement>) =>
                setAdjective(event.target.value),
            },
            languageOfPrompt: {
              value: languageOfPrompt,
              onChange: (event: any) => {
                setLanguageOfPrompt(event.target.value);
                // console.log("languageOfPrompt:", languageOfPrompt);
              },
            },
            numberOfWordsPrompt: {
              value: numberOfWordsPrompt,
              onChange: (event: any) => {
                setNumberOfWordsPrompt(event.target.value);
                console.log("numberOfWordsPrompt:", numberOfWordsPrompt);
              },
            },
          }}
        />
      </Modal>
      <Modal isOpen={modalToOpen == "CreatingPrompt"}>
        <Flex height="100%" direction="column" alignItems="center" justifyContent={"center"}>
          {"プロンプトを生成しています。。。"}
          <Loader variation="linear" />
        </Flex>
      </Modal>
      <Modal isOpen={modalToOpen == "PromptEditForm"}>
        <PromptEditForm
          onSubmit={onSubmitPromptEditForm}
          overrides={{
            Field0: {
              defaultValue: prompt,
              value: prompt,
              onChange: (event: any) => {
                let { value } = event.target;
                setPrompt(value);
              },
            },
          }}
        />
      </Modal>
      <Modal isOpen={modalToOpen == "CreatingImage"}>
        <Flex height="100%" direction="column" alignItems="center" justifyContent={"center"}>
          {"画像を生成しています。。。"}
          <Loader variation="linear" />
        </Flex>
      </Modal>
      <Modal isOpen={modalToOpen == "ImageCreateSuccessView"}>
        {imageUrl != "" && (
          <ImageCreateSuccessView
            overrides={{
              "1700652527621-cat 1": { src: imageUrl },
              Button: {
                onClick: () => {
                  setModalToOpen("");
                  setImageUrl("");
                },
              },
              prompt: { children: prompt },
            }}
          />
        )}
      </Modal>
    </>
  );
}
```
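The component above is effectively a small linear state machine driven by the `modalToOpen` string. As a hypothetical sketch to make the ordering explicit (the `MODAL_FLOW` array and `nextModal` helper are not in the app; the real transitions happen inline in the submit handlers):

```typescript
// Hypothetical sketch of the modal flow in CreateSaveImageModal.
// The actual component sets modalToOpen directly in its handlers;
// this just makes the linear ordering of the modals explicit.
const MODAL_FLOW = [
  "PromptCreateForm",       // user describes the image
  "CreatingPrompt",         // waiting for ChatGPT
  "PromptEditForm",         // user edits the generated prompt
  "CreatingImage",          // waiting for DALL-E
  "ImageCreateSuccessView", // show the result
] as const;

type ModalName = (typeof MODAL_FLOW)[number];

// Returns the next modal in the flow, or "" when the flow is finished.
function nextModal(current: ModalName): string {
  const i = MODAL_FLOW.indexOf(current);
  return i >= 0 && i + 1 < MODAL_FLOW.length ? MODAL_FLOW[i + 1] : "";
}
```

Keeping the flow linear like this is what lets a single string state drive all five modals without any nested open/close booleans.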

DataStore, Figma to Code's Collection

I saved the generated prompts and images in DataStore and displayed them using components made with Figma to Code. I used Collection, a feature that renders a Figma-designed component once for each item in a list.

Code

```tsx
"use client";
import { CreatedImage } from "@/models";
import ImageCardViewCollection from "@/ui-components/ImageCardViewCollection";
import { getSignedS3UrlFromKey } from "@/utils/createSaveImage";
import { DataStore, Predicates, SortDirection } from "aws-amplify/datastore";
import { useEffect, useState } from "react";

export default function App() {
  const [createdImages, setCreatedImage] = useState<CreatedImage[]>([]);

  const fetchCreatedImage = async () => {
    const resp = await DataStore.query(CreatedImage, Predicates.ALL, {
      sort: (s) => s.createdAt(SortDirection.DESCENDING),
    });
    // Resolve every signed URL before updating state; a forEach with an
    // async callback would call setCreatedImage before the items are ready.
    const imgs = await Promise.all(
      resp.map(async (item) => {
        const urlResp = await getSignedS3UrlFromKey(item.s3Url);
        return new CreatedImage({ title: item.title, s3Url: urlResp });
      })
    );
    setCreatedImage(imgs);
  };

  useEffect(() => {
    fetchCreatedImage();
    // Re-fetch whenever a CreatedImage record changes.
    const subscription = DataStore.observe(CreatedImage).subscribe(fetchCreatedImage);
    return () => {
      subscription.unsubscribe();
    };
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, []);

  return (
    <ImageCardViewCollection
      items={createdImages}
      overrideItems={({ item, index }) => ({
        overrides: { ImageCardView: { width: "100%" } },
      })}
    />
  );
}
```

S3

I stored the generated images in S3 via Amplify, and obtained a signed URL when displaying the list.

Code

```ts
import { getUrl, uploadData } from "aws-amplify/storage";

export async function uploadBlobToS3(blob: Blob) {
  const key = `${Date.now()}-cat.png`;
  try {
    const result = await uploadData({
      key: key,
      data: blob,
      options: { accessLevel: "guest", contentType: "image/png" },
    }).result;
    console.log("Succeeded: ", result);
  } catch (error) {
    console.log("Error : ", error);
  }
  return key;
}

export async function getSignedS3UrlFromKey(key: string) {
  const getUrlResult = await getUrl({
    key: key,
    options: { validateObjectExistence: false, expiresIn: 20 },
  });
  return getUrlResult.url.toString();
}
```

Authentication (Password, Google Login)

With Amplify, authentication is easy to add, much as with Firebase and similar services. For Google login, some GUI configuration is required because credentials have to be registered on both the Amplify and GCP sides.

Code

```tsx
import { Authenticator } from "@aws-amplify/ui-react";

// Wrap the app's children so only authenticated users can see them.
return (
  <Authenticator>
    {children}
  </Authenticator>
);
```

Next.js 14

To use Next.js 14 with Amplify Hosting, simply select Amazon Linux 2023 as the build image. Support arrived a while after Next.js 14 was released. https://github.com/aws-amplify/amplify-hosting/issues/3773

LangChain, ChatGPT API

The remaining technologies were used independently of Amplify. First, I send a prompt to ChatGPT to have it create a prompt for image generation; I used LangChain's LCEL (LangChain Expression Language) to embed the words and numbers entered by the user into that prompt.

Code

```ts
"use server";
import { PromptTemplate } from "langchain/prompts";
import { ChatOpenAI } from "langchain/chat_models/openai";

export default async function createPrompt({
  openApiKey,
  generationTarget,
  adjective,
  languageOfPrompt,
  numberOfWordsPrompt,
}: {
  openApiKey: string;
  generationTarget: string;
  adjective: string;
  languageOfPrompt: string;
  numberOfWordsPrompt: string;
}) {
  const model = new ChatOpenAI({ openAIApiKey: openApiKey });
  console.log("languageOfPrompt:", languageOfPrompt);
  // Template (in Japanese): "Think of a prompt to make DALL-E generate the
  // following, as prose rather than bullet points. Subject: {generationTarget};
  // desired look: {adjective}. Return only the prompt with no explanation,
  // create exactly one prompt, make it detailed at around
  // {numberOfWordsPrompt} words, written in {languageOfPrompt}."
  const promptTemplate = PromptTemplate.fromTemplate(
    `下記条件でDalleに出力させるためのプロンプトを考えてください。箇条書きではなく文章で
生成したいもの: {generationTarget}
どんな画像にしたいか: {adjective}
制約1:プロンプトのみを返してください。プロンプトの説明は不要です。
制約2:プロンプトは一つだけ作成してください。
制約3:プロンプトは出力が美しくなるように{numberOfWordsPrompt}単語前後で詳細に作成してください。
プロンプトの言語:{languageOfPrompt}
`
  );
  const chain = promptTemplate.pipe(model);
  console.log(chain);
  const result = await chain.invoke({
    generationTarget: generationTarget,
    adjective: adjective,
    languageOfPrompt: languageOfPrompt,
    numberOfWordsPrompt:
      numberOfWordsPrompt && numberOfWordsPrompt != "" ? numberOfWordsPrompt : "20",
  });
  console.log(result);
  return String(result.content);
}
```
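What `PromptTemplate.fromTemplate(...)` followed by `invoke(...)` does with the `{placeholder}` variables can be illustrated with a tiny substitution function. This is a conceptual sketch only, not LangChain's actual implementation:

```typescript
// Minimal illustration of {placeholder} substitution, the core idea behind
// PromptTemplate. Unknown placeholders are left untouched here; the real
// PromptTemplate instead raises an error for missing variables.
function fillTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (match, name) =>
    name in vars ? vars[name] : match
  );
}

const filled = fillTemplate(
  "生成したいもの: {generationTarget}、{numberOfWordsPrompt}単語前後",
  { generationTarget: "cat", numberOfWordsPrompt: "20" }
);
// filled: "生成したいもの: cat、20単語前後"
```

The value of LCEL is that the filled-in template is then piped straight into the model with `promptTemplate.pipe(model)`, so the substitution and the ChatGPT call compose into one `invoke`.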

DALL-E API

For the DALL-E API, I specified b64_json as the response_format and received the image as base64 data.

Code

```ts
"use server";
import OpenAI from "openai";

export async function createDalleImageAsBase64({
  openApiKey,
  prompt,
}: {
  openApiKey: string;
  prompt: string;
}) {
  const openai = new OpenAI({
    apiKey: openApiKey,
  });
  const image = await openai.images.generate({
    model: "dall-e-2",
    prompt: prompt,
    response_format: "b64_json",
    size: "256x256",
  });
  const b64_json: string = image.data[0].b64_json ? image.data[0].b64_json : "";
  return b64_json;
}
```

b64-to-blob

I converted the obtained image to a Blob before storing it in S3.

Code

```ts
// Assumed imports: createDalleImageAsBase64 from the server action above,
// uploadBlobToS3 from the S3 helpers, and the b64-to-blob package.
import b64toblob from "b64-to-blob";

export async function createFromDalleAndSaveToS3(
  openApiKey: string,
  prompt: string
) {
  const b64_json = await createDalleImageAsBase64({
    openApiKey: openApiKey,
    prompt: prompt,
  });
  const blob = b64toblob(b64_json);
  const key = await uploadBlobToS3(blob);
  return key;
}
```
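For reference, the conversion that b64-to-blob performs can be sketched in plain TypeScript using the standard `atob` function. This is a simplified version (the actual package also slices the data into chunks while decoding):

```typescript
// Simplified version of what the b64-to-blob package does: decode a base64
// string into raw bytes and wrap them in a Blob with the given MIME type.
function b64ToBlobSketch(b64: string, contentType = "image/png"): Blob {
  const byteString = atob(b64); // base64 -> binary string
  const bytes = new Uint8Array(byteString.length);
  for (let i = 0; i < byteString.length; i++) {
    bytes[i] = byteString.charCodeAt(i);
  }
  return new Blob([bytes], { type: contentType });
}
```

The resulting Blob can be handed directly to `uploadData` as shown in the S3 section.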

Summary

Using Amplify and the ChatGPT API lets you build apps quickly, so I recommend them not only for prototyping but also for hackathons.

CC BY-NC 4.0 © 2025 tanosugi · Falcon Apps Tech Blog