This page contains tests.

This is a test with a depth map created with the Depth Map Batch addon. In Blender I created a depth map using the addon and used the resulting image as a depth map in the Photoshop Lens Blur filter (not the neural plugin, the old-fashioned Lens Blur, because the newer Depth Blur neural filter would not let me use my own depth map; thank you, Adobe...).

Depth Map Batch Addon - large model - 1600 pixels

In the following example I calculated a depth map based on a 6252 x 5002 pixel image, using the highest input size available, which was 1600 px, and the Depth Anything large (vitl) model. I had to manually soften the large straight diagonal lines of the frame of the training machine.
If you look closely you can still see some halos around the entire training machine, which should be blurry.
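For anyone who wants to reproduce this step outside of Blender, here is a minimal Python sketch of the same idea, assuming the Hugging Face transformers depth-estimation pipeline and the LiheYoung/depth-anything-large-hf checkpoint. The file names, the checkpoint id and the resizing logic are my assumptions for illustration, not the addon's actual code:

# Minimal sketch: generate a Depth Anything (vitl) depth map in plain Python.
# This is NOT the Depth Map Batch addon's code; checkpoint id and file names are assumptions.
from PIL import Image
from transformers import pipeline

INPUT_SIZE = 1600  # longest edge fed to the model, mirroring the addon's input-size setting

img = Image.open("training_machine.jpg").convert("RGB")   # hypothetical file name
orig_size = img.size

# Downscale so the longest edge matches INPUT_SIZE before inference
scale = INPUT_SIZE / max(orig_size)
small = img.resize((round(img.width * scale), round(img.height * scale)), Image.LANCZOS)

# Depth Anything large (vitl) via the Hugging Face depth-estimation pipeline
depth_pipe = pipeline("depth-estimation", model="LiheYoung/depth-anything-large-hf")
depth = depth_pipe(small)["depth"]   # PIL image: white = near, black = far

# Resample the depth map back to the original 6252 x 5002 pixels for use in Photoshop
depth.resize(orig_size, Image.LANCZOS).save("training_machine_depth.png")

The addon handles the resizing and resampling for you; the sketch is only meant to show where the 1600 px input size enters the process.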
[Comparison slider: Original Photo vs. Blurred - 1600 pixels - vitl]

Depth Map Batch Addon - base model - 2000 pixels

In the following example I calculated a depth map based on a 6252 x 5002 pixel image, using a version of the addon which let me choose up to 2400 px as input size.
In this example I used 2000 pixels as the input size, plus the Depth Anything base (vitb) model.
To be honest, my 3080 Ti did not go beyond this. 
[Comparison slider: Original Photo vs. Blurred - 2000 pixels - vitb]

How do they compare?

Here are both results for comparison. The advantage here was that the calculated depth map was more contracted and contrasty to start with, and I was able to use this map directly in Photoshop with the Lens Blur filter, without retouching it manually.
You can clearly see that the result on the right does not have any halos around the training machine, where the background would otherwise stay sharp instead of blurred, and the border of the image is also blurred better.

DISCLAIMER: I don't know (as of today) if the result was better because I used the base model instead of the large model, or because I used the larger input size. I will test that! I will also evaluate whether I should start with a smaller image and render out 1:1, instead of having the addon resize the input image and then resample it back to the original pixel size. Of course the depth maps were manipulated in Photoshop: I had to invert them (which can be done in the addon too) and I added a levels adjustment to narrow down the blurred area.
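As a rough illustration of that last step, here is a small sketch, assuming Pillow and NumPy, of inverting a depth map and applying a simple levels-style remap. The black and white point values are placeholders, not the settings I used in Photoshop:

# Sketch: invert a depth map and apply a levels-style remap,
# roughly equivalent to an Invert + Levels adjustment in Photoshop.
import numpy as np
from PIL import Image

# Hypothetical file names; the depth map comes from the addon (or the sketch above)
depth = np.asarray(Image.open("training_machine_depth.png").convert("L"), dtype=np.float32)

# Invert: swap the near/far convention expected by the Lens Blur filter
inverted = 255.0 - depth

# Levels-style remap to narrow the range that stays sharp; placeholder black/white points
black_point, white_point = 60.0, 200.0
leveled = np.clip((inverted - black_point) / (white_point - black_point), 0.0, 1.0) * 255.0

Image.fromarray(leveled.astype(np.uint8)).save("training_machine_depth_leveled.png")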

Last but not least, a note from the gut: something tells me that the law of working 1:1 when creating images for the web is probably still valid, despite megapixel screens on tiny devices...
[Comparison slider: Blurred - 1600 pixels - vitl vs. Blurred - 2000 pixels - vitb]
And because we love those compare sliders, here are the depth maps I used, in comparison. Yes, the levels settings are different on each. Those are NOT the depth maps rendered by the addon; look below :) (yet another slider, dudelduh).
[Comparison slider: Blurred - 1600 pixels - vitl - depth vs. Blurred - 2000 pixels - vitb - depth]
And here is the plain output of the Depth Map Batch addon. The depth map on the right (2000 px / base model) has smaller steps (pixelisation) on the large beams of the training machine, a bit more contrast, and is a bit more contracted. The advantage was that I did not have to retouch the depth map at all.
[Comparison slider: Blurred - 1600 pixels - vitl - depth PLAIN vs. Blurred - 2000 pixels - vitb - depth PLAIN]
[Comparison slider: Marigold - straight render - 1920 - 1:1 vs. Marigold - leveled & retouched - 1920 - 1:1]