Why portrait mode is hard
What looks like a simple effect is actually a non-trivial computer vision problem. To produce a convincing background blur, you have to know — for every pixel in the image — whether it belongs to the subject or not. The decision has to be sharp at the boundary (so the subject's silhouette stays crisp) and cleanly handle subtleties like hair, glasses, and held objects.
Modern phones do this in hardware, often using two cameras at once to estimate depth, or a dedicated depth sensor. We don't have either of those — your photo arrives flat, with no depth information. So we use the same trick that single-lens cameras use when they offer a software portrait mode: a salient object detection model. The model is U-2-Net, the same network the background remover uses. It produces a mask of the foreground subject, and we use that mask to decide where to blur.
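The model's raw output is a saliency map whose values are not guaranteed to span [0, 1]; U-2-Net's reference inference code min-max normalizes each prediction per image before it is used as a mask. A minimal sketch of that step (the function name is ours, not the tool's):

```python
import numpy as np

def normalize_saliency(raw: np.ndarray) -> np.ndarray:
    """Min-max normalize a raw saliency map to [0, 1].

    Mirrors the per-image normalization U-2-Net's reference
    inference code applies before using the prediction as a mask.
    """
    lo, hi = raw.min(), raw.max()
    return (raw - lo) / (hi - lo + 1e-8)  # epsilon guards a flat map
```

The normalized map is then resized back up to the original photo's resolution and used directly as the soft foreground mask described below.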
How the blur is applied
Once we have a foreground mask, the rest is straightforward. We compute a Gaussian blur of the original image with a moderate kernel size — wide enough to read clearly as "out of focus," narrow enough not to obliterate the colour and rough shapes of the background. Then we composite: where the mask says foreground, we keep the original sharp pixels; where it says background, we use the blurred pixels; and around the boundary the mask values are between 0 and 1, so we blend the two smoothly.
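The whole blur-and-composite step fits in a few lines of NumPy. This is a sketch, not the tool's actual implementation: the kernel size, sigma, and function names here are illustrative.

```python
import numpy as np

def gaussian_kernel(sigma: float, radius: int) -> np.ndarray:
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img: np.ndarray, sigma: float = 8.0) -> np.ndarray:
    """Separable Gaussian blur of an H x W x C float image."""
    radius = int(3 * sigma)
    k = gaussian_kernel(sigma, radius)
    # Vertical pass (reflect-pad so edges stay plausible).
    pad = np.pad(img, ((radius, radius), (0, 0), (0, 0)), mode="reflect")
    out = np.zeros_like(img)
    for i, w in enumerate(k):
        out += w * pad[i:i + img.shape[0]]
    # Horizontal pass.
    pad = np.pad(out, ((0, 0), (radius, radius), (0, 0)), mode="reflect")
    out = np.zeros_like(img)
    for i, w in enumerate(k):
        out += w * pad[:, i:i + img.shape[1]]
    return out

def portrait_composite(image: np.ndarray, mask: np.ndarray,
                       sigma: float = 8.0) -> np.ndarray:
    """mask: H x W floats in [0, 1]; 1 = keep sharp, 0 = blur."""
    blurred = gaussian_blur(image, sigma)
    m = mask[..., None]  # broadcast over the colour channels
    return m * image + (1.0 - m) * blurred
```

The last line is the entire compositing logic: where the mask is 1 you get the original pixel, where it is 0 you get the blurred one, and fractional mask values blend the two.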
This produces a more natural-looking transition than a hard cutout, especially around hair and other soft edges. It is the same compositing logic photo editors use for layer masks.
When you should use this instead of a real lens
If you have a fast prime lens (f/1.8 or faster) on a full-frame camera, you should use that. Optical bokeh has properties software cannot fully reproduce: depth-correct progressive defocus, accurate handling of out-of-focus highlights ("bokeh balls"), and natural rendering of partially occluded edges. Software portrait mode is a substitute, not an equivalent.
That said, software is the right tool when:
- You took the photo on a phone or a small-sensor camera
- The original photo is from a friend, archive, or service that lacks the depth data
- You want to add the effect after the fact to an old picture
- You need consistent results across many photos quickly
When the result will look unconvincing
- Multiple subjects at different distances. Real bokeh changes with depth; software bokeh treats everything in the foreground mask as one focal plane.
- Subjects holding or interacting with the background. A hand resting on a fence, a person leaning on a wall — the model often blurs the contact point.
- Thin or transparent edges. Wisps of hair, mesh fabric, and glasses lenses sit at scales the segmentation mask cannot resolve, so fine and see-through detail tends to be lost.
- Already shallow depth of field. If the photo was already taken with bokeh, adding more on top often produces a strange double-blur effect.
- Out-of-focus highlights in the background. Real bokeh transforms point lights into discs; Gaussian blur just smears them. Strings of fairy lights or city lights at night can look noticeably "fake."
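The last point is easy to see with kernels. Convolving a single bright pixel (a point light) with a blur kernel reproduces the kernel itself, so comparing the two kernels shows exactly what each blur does to a highlight. A small NumPy sketch (radii and sigma are illustrative):

```python
import numpy as np

def disc_kernel(radius: int) -> np.ndarray:
    """Flat circular kernel: what a real lens does to a point light."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = (x ** 2 + y ** 2 <= radius ** 2).astype(np.float64)
    return k / k.sum()

def gaussian_kernel2d(sigma: float, radius: int) -> np.ndarray:
    """Gaussian kernel: what our software blur does to the same light."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return k / k.sum()

bokeh = disc_kernel(6)             # flat-topped disc with a hard edge
smear = gaussian_kernel2d(2.0, 6)  # peaked blob fading smoothly to nothing
```

The disc is uniformly bright with a crisp edge, which is why real bokeh balls read as discs; the Gaussian peaks at the centre and fades out, which is why blurred fairy lights look smeared rather than optical.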
Tips for the cleanest results
- Start with a sharp source. The subject must be the sharpest part of the image; otherwise the blur fights with the existing softness.
- One subject is best. Two subjects at different distances will both end up sharp, defeating the depth-of-field illusion.
- Look for clear separation. A subject with empty space around it (a sky, a wall, a path) will mask cleanly. A subject pressed against a busy background may have edge artefacts.
- Avoid backgrounds that mimic the subject's colour. See the "camouflage" warning on the background remover page — same model, same limitation.
- Crop tight before uploading. The 512px downscale takes resolution off the subject; a tight crop gives the segmenter more to work with.
Why this exists alongside the background remover
You may have noticed that this tool and the background remover share the same first stage. They differ only in what they do with the mask. Background remover replaces the background with transparency; portrait mode replaces it with a blurred copy of itself. We expose both because they serve different needs: a transparent cutout for layering and design work, a portrait-mode photo for sharing as-is.
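Schematically, the two tools are one pipeline with two finishing steps. A sketch of the split, assuming the mask and the blurred copy have already been computed as described above (function names are ours, not the actual code):

```python
import numpy as np

def remove_background(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Background remover's finish: the mask becomes an alpha channel."""
    alpha = (np.clip(mask, 0.0, 1.0) * 255).astype(np.uint8)
    return np.dstack([image, alpha])  # H x W x 4 RGBA cutout

def portrait_mode(image: np.ndarray, mask: np.ndarray,
                  blurred: np.ndarray) -> np.ndarray:
    """Portrait mode's finish: blend sharp foreground over the blurred copy."""
    m = mask[..., None]
    return m * image + (1.0 - m) * blurred
```

Everything upstream of these two functions, from the 512px downscale to the U-2-Net mask, is shared.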