File Uploads with SvelteKit and Cloudflare
I am currently building magiedit, a markdown editor that can also publish to different platforms (like Hashnode and Dev.to). Since I am building this tool primarily for myself, I decided to add some shortcuts/commands to make my life simpler, including inserting images from Unsplash and GIFs from Giphy.
One of the features I'm working on lets users generate images with DALL·E (and, as soon as I can get it working, cover images based on their article's content). For that I needed to store the selected images somewhere: OpenAI's URLs only last for an hour, and storing 4 MB of base64 data inside an article makes editing it awful. I turned to Cloudflare's R2 storage, because I already use Cloudflare for my blog's images and my DNS management (plus, their free tier is pretty generous).
Using Cloudflare R2 Storage
Cloudflare's R2 Storage is an S3-compatible storage solution (yes, S3 as in the one from Amazon), meaning you can interact with it in "almost" the same way you would with an S3 bucket (there are some unsupported methods, which you can find here).
So, for my use case, I created a bucket, paired it with a domain name, and then generated an access key scoped to that specific bucket (so that I could upload things from my SvelteKit API function). Again, the link above should provide all the instructions on how to get these values and what they all mean. Now that I had a bucket, let's get to the heart of this article: how to interact with a bucket using SvelteKit (try saying that sentence out of context).
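To keep the credentials out of the code, the values from that setup end up as environment variables. The variable names below are the ones the code later in this article reads; the values are placeholders:

```
CLOUDFLARE_ACCOUNT_ID=<your account id>
CLOUDFLARE_ACCESS_KEY_ID=<access key id for the bucket>
CLOUDFLARE_SECRET_ACCESS_KEY=<secret access key for the bucket>
CLOUDFLARE_BUCKET_NAME=<bucket name>
CLOUDFLARE_BUCKET_URL=https://your-bucket-domain.example
```

In SvelteKit, private variables like these can be imported from `$env/static/private`, which keeps them server-side only.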
SvelteKit integration
Now for the fun part: using all of this in SvelteKit. Let me preface this section by saying that I had specific needs in this application that required a somewhat unorthodox architecture. Since the application stores user articles, I wanted those to be offline first, along with other information required for using the application (like tokens for publishing); this means I had to create API routes instead of handling all of this in form actions. Having said that, let's see how it all works.
Getting the images
For my specific use case, I needed to store images generated by DALL·E, which meant calling OpenAI's API; that service returns either a base64 string containing the image or a URL (which expires after one hour). Passing around a 4 MB string didn't sound like a good idea (trust me, I tried), so I was left with the URL.
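OpenAI's images API returns a `data` array whose entries contain either a `url` or a `b64_json` field, depending on the requested `response_format`. As a small sketch of picking the URL out of that response (the helper name `extractImageUrl` is mine, not from magiedit):

```typescript
// Shape of one entry in the `data` array returned by OpenAI's images API.
type ImageData = { url?: string; b64_json?: string };

// Return the hosted URL if present; throw otherwise, since we deliberately
// avoid the multi-megabyte base64 payload.
function extractImageUrl(response: { data: ImageData[] }): string {
  const first = response.data[0];
  if (!first?.url) {
    throw new Error('expected a url (was response_format set to "url"?)');
  }
  return first.url;
}
```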
This is the (hopefully) final code I came up with:
import { error, json } from '@sveltejs/kit';
import { Buffer } from 'node:buffer';
import { v4 as uuidv4 } from 'uuid';
import type { RequestHandler } from './$types';
// db, userImages and saveToBucket are project-specific imports, omitted here

export const POST: RequestHandler = async ({ locals, request }) => {
	const session = await locals.auth.validate();
	if (!session) {
		throw error(401, { message: 'not authorized' });
	}
	const formData = await request.formData();
	const { content, description } = Object.fromEntries(formData);
	if (!content) {
		throw error(400, { message: 'invalid input' });
	}
	// download the image from the (short-lived) url and keep it as a buffer
	const arrayBuffer = await (await fetch(content.toString())).arrayBuffer();
	const data = Buffer.from(new Uint8Array(arrayBuffer));
	const key = `${session.user.userId}_${uuidv4()}`;
	const url = await saveToBucket(data, key);
	// store the public url alongside the user so images can be listed later
	await db.insert(userImages).values({
		url,
		userId: session.user.userId,
		description: description?.toString()
	});
	return json({ message: 'ok', url });
};
These two lines took me a long time to figure out, because I kept running into heap problems (this single call was making my program run out of memory, which is not normal):
const arrayBuffer = await (await fetch(content.toString())).arrayBuffer();
const data = Buffer.from(new Uint8Array(arrayBuffer));
This lets me download the image from the URL OpenAI provides, store it as a buffer, and then do whatever I want with it (in this case, store it in a bucket). Then I generate a key from the user's id and a random uuid (I will always have access to the user, because this is one of the few commands requiring authentication), which becomes the file name, and finally call the saveToBucket function.
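Sketching that key generation on its own (here using Node's built-in `crypto.randomUUID` instead of the `uuid` package, which behaves the same for this purpose):

```typescript
import { randomUUID } from 'node:crypto';

// Build a bucket key that is unique per upload but still traceable to a user.
function makeImageKey(userId: string): string {
  return `${userId}_${randomUUID()}`;
}
```

Prefixing with the user id also makes it trivial to tell, from the key alone, which user an object belongs to.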
Saving the images (or any other file, really)
You may have noticed a call to a function named saveToBucket in the previous code sample; well, as the name suggests, this is where the buffer we got earlier gets saved into the bucket and we get a URL back:
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import {
	CLOUDFLARE_ACCOUNT_ID,
	CLOUDFLARE_ACCESS_KEY_ID,
	CLOUDFLARE_SECRET_ACCESS_KEY,
	CLOUDFLARE_BUCKET_NAME,
	CLOUDFLARE_BUCKET_URL
} from '$env/static/private';

export async function saveToBucket(data: Buffer, key: string) {
	const S3 = new S3Client({
		region: 'auto',
		endpoint: `https://${CLOUDFLARE_ACCOUNT_ID}.r2.cloudflarestorage.com`,
		credentials: {
			accessKeyId: CLOUDFLARE_ACCESS_KEY_ID,
			secretAccessKey: CLOUDFLARE_SECRET_ACCESS_KEY
		}
	});
	await S3.send(
		new PutObjectCommand({
			ACL: 'public-read',
			Key: key,
			Body: data,
			Bucket: CLOUDFLARE_BUCKET_NAME
		})
	);
	return `${CLOUDFLARE_BUCKET_URL}/${key}`;
}
Again, most of this comes from the Cloudflare documentation I linked above, like how to set up the S3Client. This function basically pulls together a bunch of environment variables and sends a single command that uploads a file to a specific bucket. And just like that, we have image upload!
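For completeness, here is roughly what calling the endpoint looks like from the client side. The `/api/images` path is an assumption on my part; substitute whatever route the handler actually lives at:

```typescript
// Build the multipart body the endpoint expects: the image url under
// `content`, plus an optional description.
function buildUploadForm(imageUrl: string, description?: string): FormData {
  const form = new FormData();
  form.append('content', imageUrl);
  if (description) form.append('description', description);
  return form;
}

// Usage (browser, or any runtime with fetch and FormData):
// const res = await fetch('/api/images', {
//   method: 'POST',
//   body: buildUploadForm(openaiUrl, 'a generated cover image')
// });
// const { url } = await res.json();
```

Using FormData (rather than JSON) is what makes `request.formData()` work on the server side of the handler above.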
What’s left after uploading
The one thing I wanted was for users to be able to retrieve all the images they had generated (and, at some point, download them), so I simply stored the URL of each new image along with the user's id in my database, and retrieve all the URLs belonging to a specific user id when necessary.