A small, independent project

Free image tools that explain themselves.

Image-Studio.net is a hand-curated collection of browser-based image processing tools backed by open-source computer vision models. Every tool runs on a server we maintain ourselves and has no sign-up, no watermark, and no usage cap. Every tool is also paired with a plain-language guide that explains how the underlying model works, what it is good at, and where it falls down.

We built it because most "AI image editor" sites are wrappers over the same handful of public models, monetised aggressively, with no explanation of what is actually happening to your image. We wanted something honest: working tools, a candid description of the technology, no dark patterns.

The tools

7 tools, all free, no sign-up

How a typical session works

01

Pick a tool

Each tool has its own page describing what model it uses, what the model is good at, and what kind of input gives the best output. Read the description first, especially for the older models like the colorizer.

02

Upload an image

Drag a PNG, JPG, or WEBP onto the upload area. The image is sent over HTTPS to our processing server, downscaled to 512 pixels on the long edge for speed, fed through the model, and returned to your browser as a PNG.
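The server-side half of that step can be sketched roughly like this. This is a minimal illustration using Pillow under our own assumptions (function names are ours, not the site's actual code):

```python
from io import BytesIO

from PIL import Image

LONG_EDGE = 512  # inputs are downscaled before inference for speed


def preprocess_upload(raw_bytes: bytes) -> Image.Image:
    """Decode an uploaded PNG/JPG/WEBP and cap the long edge at 512 px."""
    img = Image.open(BytesIO(raw_bytes)).convert("RGB")
    scale = LONG_EDGE / max(img.size)
    if scale < 1:  # never upscale inputs that are already small
        new_size = (round(img.width * scale), round(img.height * scale))
        img = img.resize(new_size, Image.LANCZOS)
    return img


def encode_result(img: Image.Image) -> bytes:
    """Return the processed image as PNG bytes for the browser."""
    buf = BytesIO()
    img.save(buf, format="PNG")
    return buf.getvalue()
```

A 1024×768 upload comes out at 512×384; the model runs on that, and the result is re-encoded as PNG regardless of the input format.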

03

Download and move on

The result is a regular PNG you can save anywhere. Your upload is purged from our processing directory the moment your download completes. We never log image bytes, never sample for training, never share with anyone.

Long-form guides

All guides →

If you have ever wondered how a model like GFPGAN actually reconstructs a face it has never seen, or why background removal struggles with hair, the guides go through it in plain language. They're not press releases — they explain the architectures and admit where the models fail.

Frequently asked questions

Is Image-Studio.net actually free?

Yes, with no asterisks. No account, no email, no credit card, no usage limits, no watermark on the output. The cost of running the server is covered by Google AdSense advertising on these pages. If you find the tools useful, allowlisting the domain in your ad-blocker is the only thing we'd ask.

What happens to my images?

Your image is uploaded over HTTPS, written to a temporary working directory on the processing server, fed through the relevant model, and returned to your browser. The temporary copy is deleted as soon as the response is sent. We do not retain backups, do not run analytics on the pixel data, and do not use uploads for any kind of model training. The privacy policy has the long version.
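The lifecycle described above — temporary file in, model out, file deleted no matter what — is a common server pattern. A sketch of it in Python (our own illustration, not the site's server code; `run_model` is a placeholder for the actual inference call):

```python
import os
import tempfile


def process_request(raw_bytes: bytes, run_model) -> bytes:
    """Write the upload to a temp file, run the model, delete the file even on failure."""
    fd, path = tempfile.mkstemp(suffix=".png")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(raw_bytes)
        return run_model(path)  # result bytes that get sent back to the browser
    finally:
        os.unlink(path)  # the temporary copy is gone once the response is built
```

The `finally` clause is what makes the "deleted as soon as the response is sent" promise hold even when the model errors out mid-request.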

What file formats can I upload?

PNG, JPG, and WEBP. Results are always returned as PNG to preserve transparency where applicable (the background remover specifically depends on this). Maximum recommended file size is around 5 MB; larger files work but spend more time on the upload than on the actual processing.

Why is the result smaller than my original?

For speed and to keep the service free, every input is downscaled to 512 pixels on the long edge before being sent to the model. The face restorer and photo healer then upscale internally back to a higher resolution. If you need the output at the original resolution, you can upscale the result locally with a separate tool — but for most uses (web, social, print at small sizes) 512 is fine.
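If you do need a larger file, a quick local upscale looks like the following. This uses plain Lanczos resampling with Pillow as an illustration; a dedicated super-resolution tool will give sharper results:

```python
from PIL import Image


def upscale(path_in: str, path_out: str, factor: int = 2) -> None:
    """Plain Lanczos upscale — fine for web and small prints, not true super-resolution."""
    img = Image.open(path_in)
    bigger = img.resize((img.width * factor, img.height * factor), Image.LANCZOS)
    bigger.save(path_out)
```

For example, `upscale("result.png", "result_2x.png")` turns a 512-wide download into a 1024-wide file.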

Can I use the results commercially?

Yes. You retain full ownership of anything you upload and anything you download. We claim no rights over the output. Bear in mind that you are responsible for the rights to the original image — if you didn't take the photo or have permission to use it, that doesn't change just because a model touched it.

Which models do you use?

All open-source. Background removal uses U-2-Net. Colorization uses the Colorful Image Colorization model from the 2016 Zhang et al. paper. Face restoration uses GFPGAN from Tencent ARC. Anime conversion uses AnimeGAN v2. Object detection uses MobileNet V3 Faster R-CNN. Photo healing combines Real-ESRGAN with morphological inpainting. Portrait mode uses U-2-Net for segmentation, then applies a Gaussian blur to the background region selected by the inverted mask.
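The portrait-mode composite is simple enough to sketch. Here the U-2-Net output is stood in for by a precomputed grayscale mask (white = subject), so this is an illustration of the blur-and-composite step only, not the full pipeline:

```python
from PIL import Image, ImageFilter


def portrait_blur(photo: Image.Image, subject_mask: Image.Image,
                  radius: float = 8.0) -> Image.Image:
    """Blur everything outside the subject.

    `subject_mask` stands in for U-2-Net's segmentation output: a
    grayscale ("L") image, white where the subject is, black elsewhere.
    """
    blurred = photo.filter(ImageFilter.GaussianBlur(radius))
    # Composite: keep `photo` where the mask is white, `blurred` elsewhere.
    return Image.composite(photo, blurred, subject_mask)
```

In other words, the "inverted mask" from the FAQ answer is handled implicitly: `Image.composite` picks the sharp original inside the mask and the blurred copy everywhere else.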

How long does processing take?

On an unloaded server, most operations finish in 2–5 seconds end-to-end (upload + processing + download). The photo healer takes longer — typically 8–15 seconds — because it runs three models in sequence. If the queue is busy you may wait up to 30 seconds for a slot.

Why does the output sometimes look weird?

Every model has failure modes. The colorizer can desaturate skin tones in unusual lighting. GFPGAN can over-smooth wrinkles in older faces (it was trained mostly on younger subjects). The anime converter sometimes gives realistic eyes unfortunate proportions. The guides explain each model's specific weaknesses in detail. If a result looks off, retrying with a slightly different crop often helps.

Do you store anything in my browser?

We do not set any first-party cookies. Google AdSense, used to display advertising, may set its own cookies according to Google's policy. There is no user analytics platform attached to the site. The temporary blob URL created in your browser to display the result is just an in-memory reference to the image you downloaded; closing the tab cleans it up.

Who runs this site?

A single developer working on computer vision, hosting on personal infrastructure. There is no company, no investor, no team. The About page has more, including a contact address.

A note on responsible use

These tools are powerful enough to alter photographs in ways that could mislead. The face restorer in particular generates plausible facial detail that did not exist in the input — it does not recover truth, it invents something consistent with what the model has seen. Treat the output of any generative model as an interpretation, not evidence.

For your own family photos, vacation snaps, profile pictures, and creative projects, none of this is a problem. For journalism, legal documentation, or anything where authenticity matters, do not use AI restoration without clearly disclosing it.