Foreground segmentation + Gaussian blur

Add a natural background blur

Simulate the shallow depth of field of a fast prime lens on any photograph, taken on any camera. The model identifies the subject, then applies a Gaussian blur only to the background, leaving the subject sharp.

Why portrait mode is hard

What looks like a simple effect is actually a non-trivial computer vision problem. To produce a convincing background blur, you have to know — for every pixel in the image — whether it belongs to the subject or not. The decision has to be sharp at the boundary (so the subject's silhouette stays crisp), and the segmentation has to handle subtleties like hair, glasses, and held objects cleanly.

Modern phones do this in hardware, often using two cameras at once to estimate depth, or a dedicated depth sensor. We don't have either of those — your photo arrives flat, with no depth information. So we use the same trick that single-lens cameras use when they offer a software portrait mode: a salient object detection model. The model is U-2-Net, the same network the background remover uses. It produces a mask of the foreground subject, and we use that mask to decide where to blur.
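
For concreteness, here is a minimal sketch of that masking stage. The `predict_saliency` callable is a hypothetical stand-in for the actual U-2-Net inference code, and the 512-pixel input size follows the downscale mentioned in the tips below; the point is that the network's probability map is kept soft (values in [0, 1]) and resized back to the photo's real resolution.

```python
import numpy as np
from PIL import Image

def foreground_mask(image: Image.Image, predict_saliency) -> np.ndarray:
    """Turn a saliency model's probability map into a soft foreground mask in [0, 1]."""
    # The network sees a fixed-size input; the mask is scaled back up afterwards.
    small = image.convert("RGB").resize((512, 512), Image.BILINEAR)
    prob = predict_saliency(np.asarray(small, dtype=np.float32) / 255.0)  # hypothetical inference call

    # Clamp to [0, 1] and restore the photo's original resolution.
    prob = np.clip(prob, 0.0, 1.0)
    mask_img = Image.fromarray((prob * 255).astype(np.uint8)).resize(image.size, Image.BILINEAR)
    return np.asarray(mask_img, dtype=np.float32) / 255.0
```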

How the blur is applied

Once we have a foreground mask, the rest is straightforward. We compute a Gaussian blur of the original image with a moderate kernel size — wide enough to read clearly as "out of focus," narrow enough not to obliterate the colour and rough shapes of the background. Then we composite: where the mask says foreground, we keep the original sharp pixels; where it says background, we use the blurred pixels; and around the boundary the mask values are between 0 and 1, so we blend the two smoothly.

This produces a more natural-looking transition than a hard cutout, especially around hair and other soft edges. It is the same compositing logic photo editors use for layer masks.
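
A minimal sketch of that compositing step, assuming an OpenCV Gaussian blur and the soft mask from the segmentation stage (the kernel size here is an illustrative guess, not the tool's actual setting):

```python
import cv2
import numpy as np

def portrait_blur(image: np.ndarray, mask: np.ndarray, kernel: int = 31) -> np.ndarray:
    """Keep the subject sharp and blend it over a Gaussian-blurred background.

    image: HxWx3 uint8 photo; mask: HxW float array in [0, 1], 1 = foreground.
    kernel is an illustrative size and must be odd for cv2.GaussianBlur.
    """
    blurred = cv2.GaussianBlur(image, (kernel, kernel), 0)

    # Soft per-pixel blend: foreground comes from the original, background from
    # the blurred copy, and in-between mask values mix the two smoothly.
    alpha = mask.astype(np.float32)[..., None]
    out = alpha * image.astype(np.float32) + (1.0 - alpha) * blurred.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Blending in floating point before converting back to uint8 is what keeps the soft mask edges smooth rather than stepped.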

When you should use this instead of a real lens

If you have a fast prime lens (f/1.8 or faster) on a full-frame camera, you should use that. Optical bokeh has properties software cannot fully reproduce: depth-correct progressive defocus, accurate handling of out-of-focus highlights ("bokeh balls"), and natural rendering of partially occluded edges. Software portrait mode is a substitute, not an equivalent.

That said, software is the right tool when:

When the result will look unconvincing

Tips for the cleanest results

  1. Start with a sharp source. The subject must be the sharpest part of the image; otherwise the blur fights with the existing softness.
  2. One subject is best. Two subjects at different distances will both end up sharp, defeating the depth-of-field illusion.
  3. Look for clear separation. A subject with empty space around it (a sky, a wall, a path) will mask cleanly. A subject pressed against a busy background may have edge artefacts.
  4. Avoid backgrounds that mimic the subject's colour. See the "camouflage" warning on the background remover page — same model, same limitation.
  5. Crop tight before uploading. The 512px downscale takes resolution off the subject; a tight crop gives the segmenter more to work with.

Why this exists alongside the background remover

You may have noticed that this tool and the background remover share the same first stage. They differ only in what they do with the mask. The background remover replaces the background with transparency; portrait mode replaces it with a blurred copy of itself. We expose both because they serve different needs: a transparent cutout for layering and design work, a portrait-mode photo for sharing as-is.
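
To make that contrast concrete, both outputs can be written in terms of the same mask. This reuses the hypothetical `image`, `mask`, and `portrait_blur` sketches from above; the RGBA packing is an illustration of the cutout path, not the background remover's actual code.

```python
# (continues the earlier sketches: image, mask, portrait_blur, np are already defined)

# Background remover: the mask becomes the alpha channel, so the background turns transparent.
cutout = np.dstack([image, (mask * 255).astype(np.uint8)])

# Portrait mode: the same mask decides where the Gaussian-blurred copy shows through.
portrait = portrait_blur(image, mask)
```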