Automating the Raw Workflow - design concepts

Post by Andrew »

As I develop my scripts and use them more, my ideas about the underlying conceptual structure are getting sorted out. Here are some thoughts on key principles for the automated workflow:

1. The base for archiving and image delivery is the original RAW file and its associated (potentially multiple) xmp files - NOT various saved versions of multi-layer processed files.

I aim to get 90% of my color correction done in ACR. I keep my raw files and 1, 2 or 3 xmp files (the precursors to a multi-layer processed TIFF), and most of the time that is all.

2. I now have scripts that will take the raw file and its multiple xmp files and process them 90% of the way to various finished products, e.g. optimised TIFFs for input to a print run, optimised large JPGs, and optimised web JPGs complete with thumbnails (a rough sketch of the driver logic is at the end of this point).

These scripts generally require small amounts of user input at key points, but they are very efficient and fast. This is why it is not necessary to save multi-layer processed files, e.g. TIFFs. The archiving requirement becomes 10 MB per image rather than 100-200 MB per image. For the vast majority of users, including (especially?) professionals, this makes a lot of sense, and I believe it frees you from the financial, physical and psychological constraints of needing another 1 GB of storage for every 5-10 images you add to the archive.

Yes, you might decide to keep certain processed 'ready for print' images during the period when you are actively printing them (or waiting for a customer to decide their needs), but you do not need to keep processed versions of everything you do, because recreating the product with automation is so fast - and you will only get better at that recreation over time.
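
To make this concrete, here is a rough sketch (in Python, not my actual scripts) of the driver logic: for each raw file, find its sidecars and hand every raw + xmp pairing to a converter for each output recipe. The render() function, the recipe sizes and the sidecar naming are placeholders for whatever converter and conventions you actually use.

from pathlib import Path

# Output recipes: label, target long edge in pixels, format. Illustrative values only.
RECIPES = [
    ("print_tiff", 4000, "tif"),
    ("large_jpg",  2400, "jpg"),
    ("web_jpg",     800, "jpg"),
    ("thumb",       160, "jpg"),
]

def sidecars_for(raw):
    """All xmp sidecars belonging to one raw file, e.g. img.xmp, img_v2.xmp."""
    return sorted(raw.parent.glob(raw.stem + "*.xmp"))

def render(raw, xmp, out, long_edge):
    """Placeholder: call your raw converter / Photoshop script here."""
    print(f"{raw.name} + {xmp.name} -> {out.name} ({long_edge}px)")

def process_folder(folder, out_root):
    for raw in sorted(folder.glob("*.CR2")):        # adjust for your camera's raw extension
        for xmp in sidecars_for(raw):
            for label, long_edge, ext in RECIPES:
                out_dir = out_root / label
                out_dir.mkdir(parents=True, exist_ok=True)
                render(raw, xmp, out_dir / f"{xmp.stem}_{label}.{ext}", long_edge)

process_folder(Path("archive/2005-06-12"), Path("output"))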

3. I have opted for a two-level system for retrieving / finding images.

First I name them by the date and time at which they were created. Then I add keywords to the xmp data and search for raw files by keyword (the File Browser has a good keyword search function, as do some of my own scripts). I have created a keyword insertion script that writes straight to the xmp files; it is virtually instantaneous for any number of files and does not cause PS to regenerate the preview data in the File Browser.
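
To give a flavour of what 'writing straight to the xmp files' involves, here is a rough sketch in Python (standard library only, not my actual script) that appends keywords to the dc:subject bag of a sidecar. Treat it as an illustration of the approach: it drops the xpacket wrapper ACR writes around the packet, and a dedicated XMP library would preserve that and the namespace prefixes more faithfully.

import xml.etree.ElementTree as ET
from pathlib import Path

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
DC = "http://purl.org/dc/elements/1.1/"

# Keep common prefixes stable on re-serialisation; ElementTree invents
# prefixes (ns0, ns1, ...) for any namespace it has not been told about.
ET.register_namespace("rdf", RDF)
ET.register_namespace("dc", DC)
ET.register_namespace("x", "adobe:ns:meta/")

def add_keywords(xmp_path, keywords):
    """Append keywords to the dc:subject bag of an xmp sidecar, skipping duplicates."""
    tree = ET.parse(xmp_path)
    desc = tree.getroot().find(f".//{{{RDF}}}Description")
    if desc is None:
        raise ValueError(f"no rdf:Description found in {xmp_path}")

    subject = desc.find(f"{{{DC}}}subject")
    if subject is None:
        subject = ET.SubElement(desc, f"{{{DC}}}subject")
    bag = subject.find(f"{{{RDF}}}Bag")
    if bag is None:
        bag = ET.SubElement(subject, f"{{{RDF}}}Bag")

    existing = {li.text for li in bag.findall(f"{{{RDF}}}li")}
    for kw in keywords:
        if kw not in existing:
            ET.SubElement(bag, f"{{{RDF}}}li").text = kw

    # Note: this rewrites the sidecar without the <?xpacket?> wrapper.
    tree.write(xmp_path, encoding="utf-8", xml_declaration=False)

# Illustrative file name following the date-and-time naming convention:
# add_keywords(Path("20050612_1432_0001.xmp"), ["landscape", "scotland"])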

4. I always use xmp sidecars for my RAW files - the ability to edit these files independently of PS saves a huge amount of time and allows you to do many things that otherwise are impossible or very difficult (e.g. multi-layer raw processing).
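
By multi-layer raw processing I mean converting the same raw file two or three times with different settings (say one pass for the highlights and one pushed for the shadows) and combining the results into one layered file. Because the sidecars are plain text, a script can generate the setting variants directly. The sketch below assumes the exposure value is stored as a crs:Exposure attribute on rdf:Description - the attribute names vary between ACR versions, so check your own sidecars first.

import shutil
import xml.etree.ElementTree as ET
from pathlib import Path

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
CRS = "http://ns.adobe.com/camera-raw-settings/1.0/"   # Camera Raw settings namespace
ET.register_namespace("rdf", RDF)
ET.register_namespace("crs", CRS)
ET.register_namespace("x", "adobe:ns:meta/")

def make_exposure_variant(xmp_path, stops):
    """Copy a sidecar and shift its exposure setting, e.g. +1.5 stops for a shadow pass."""
    xmp_path = Path(xmp_path)
    variant = xmp_path.with_name(f"{xmp_path.stem}_exp{stops:+.1f}.xmp")
    shutil.copyfile(xmp_path, variant)

    tree = ET.parse(variant)
    desc = tree.getroot().find(f".//{{{RDF}}}Description")
    if desc is None:
        raise ValueError(f"no rdf:Description found in {variant}")

    key = f"{{{CRS}}}Exposure"   # assumption: exposure stored as an attribute under this name
    current = float(desc.get(key, "0"))
    desc.set(key, f"{current + stops:+.2f}")
    tree.write(variant, encoding="utf-8", xml_declaration=False)
    return variant

# One sidecar as shot for the highlights, one pushed for the shadows:
# make_exposure_variant("20050612_1432_0001.xmp", +1.5)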

5. It is possible that I will sometimes add further archival image data to keep with my raw files and xmp files.

So far this has not been necessary, but perhaps at some stage it will become so.

For example, I may want to keep 'sensor dust maps' for a given set of images. Or I might want to keep a particularly tricky combination of image color adjustment layers (say a levels layer, a curves layer and a hue layer) for an image.

The approach I take with additional image data like this will also be optimised for speed, automation and efficient archiving.

For example, a TIFF file which consists of several adjustment layers and NOTHING else need only be 50-100 KB regardless of the underlying image size. This adjustment TIFF can be identified by a suffix on the file name, and perhaps also by notes added to the RAW xmp data, and can then be recombined with the RAW file by automation when needed.

Other image data may need other techniques - for example, a dust map could be saved as an 8-bit greyscale JPG.

The point about automation is that as long as you develop a workflow that is designed to be automated, image and data handling becomes essentially free. Scripts have no trouble looking across several file types to decide what to do.
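
Concretely, the 'decide what to do' step can be as simple as grouping every file that shares a raw file's basename and branching on what is present. The _adj and _dust suffixes below are hypothetical examples of the naming conventions described above.

from pathlib import Path

def companions(raw):
    """Everything archived alongside one raw file, keyed by role."""
    raw = Path(raw)
    stem, folder = raw.stem, raw.parent
    return {
        "sidecars":    sorted(folder.glob(stem + "*.xmp")),
        "adjustments": sorted(folder.glob(stem + "*_adj.tif")),        # adjustment-layers-only TIFFs
        "dustmap":     next(iter(folder.glob(stem + "*_dust.jpg")), None),
    }

def plan(raw):
    """Decide the processing steps from whichever companion files exist."""
    c = companions(raw)
    name = Path(raw).name
    steps = [f"convert {name} with {x.name}" for x in c["sidecars"]] or [f"convert {name} with default settings"]
    if c["dustmap"] is not None:
        steps.append(f"heal dust using {c['dustmap'].name}")
    steps += [f"re-apply adjustment layers from {a.name}" for a in c["adjustments"]]
    return steps

for step in plan("archive/20050612_1432_0001.CR2"):
    print(step)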

Andrew