What file format should I use with Altair Cameras?

This blog outlines the differences between the various still and video file formats available with your Altair camera when using AltairCapture or SharpCap.

Still file formats (1 image frame per file)

The .PNG file format:

.PNG Pros: Can be loaded into almost any graphics application. Handles bit depths from 8 to 16 bits and mono or colour images.

.PNG Cons: Many imaging applications may discard detail from 16 bit PNG files when loading them. RAW images saved in PNG will appear monochrome with a checkerboard pattern and may need additional manual settings in post-processing to ensure correct debayering. SharpCap can only re-load 8 bits of data from PNG files, even when loading 16 bit saved files.

The .FITS file format:

.FITS Pros: Supports 8 bit and greater bit depths. Supports mono, colour and raw images. Image data such as exposure is stored in the file and some applications will read this data. SharpCap can load 16 bits of data from FITS files.

.FITS Cons: Can only be opened by a limited number of applications. Some applications require additional plugins to open this file type. File format is very complex and flexible, so files may display incorrectly in some applications and correctly in others.
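To make the FITS structure less mysterious: a FITS header is just a sequence of 80-character ASCII "cards" (keyword, `= `, value, optional `/ comment`), terminated by an `END` card and padded into 2880-byte blocks. Below is a minimal sketch of reading such cards with only the Python standard library; for real work you would use a proper library like astropy, and note that this simplified parser does not handle every quirk of the standard (e.g. slashes inside quoted strings).

```python
# Minimal sketch of parsing FITS header cards (80-byte ASCII records).
# Illustrative only - use a full FITS library for real files.

def parse_fits_header(raw: bytes) -> dict:
    """Parse 80-byte header cards until the END card is found."""
    cards = {}
    for i in range(0, len(raw), 80):
        card = raw[i:i + 80].decode("ascii")
        keyword = card[:8].strip()           # keyword lives in columns 1-8
        if keyword == "END":
            break
        if card[8:10] == "= ":               # a value card
            value = card[10:].split("/", 1)[0].strip()  # drop the comment
            cards[keyword] = value.strip("'").strip()
    return cards

# Build a tiny fake header for demonstration (real files pad to 2880 bytes).
header = b"".join(c.ljust(80).encode("ascii") for c in [
    "SIMPLE  =                    T",
    "BITPIX  =                   16",
    "EXPTIME =                  2.5 / exposure time in seconds",
    "END",
])
print(parse_fits_header(header)["EXPTIME"])  # 2.5
```

This is why applications can read back metadata such as the exposure: it is stored as plain text right in the file.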

Video file formats (many image frames per file) – used for solar system imaging, stacking and so on.

The .AVI file format:

.AVI Pros: Can be viewed in almost any video playback software. Can be uploaded to YouTube or, for better quality, Vimeo.

.AVI Cons: The file format is complex and has many sub-formats. Correct playback may depend on other software and codecs installed on the machine, and playback and processing errors can be subtle and difficult to solve. 8 bit only. Mono and RAW saved in AVI may appear upside down due to limitations of the file format (but hey, this is astronomy, right?). May not stack as well as .SER format due to compression artefacts. (Speaking of which, Altair Cameras do not compress .AVI video, so they give better quality than, say, a re-purposed webcam, DSLR or camcorder, which compress video and reduce quality.)

The .SER file format:

.SER Pros: A simple file format with few variations – applications tend to either work correctly with it or not at all. The file is written with the Bayer pattern of the camera, which simplifies post-processing for RAW captures. Supports bit depths from 8 up to 16 bits per pixel (more than .AVI!). Each frame in the file is exactly timestamped. Supports mono, RAW and RGB captures.

.SER Cons: Fewer post-processing applications support the SER format, but the most commonly used ones – AutoStakkert AS2 (for stacking), Registax 5 and 6 (for stacking and wavelet sharpening), and PIPP (for preparing video files for processing in the above) – all accept .SER. Interpretation of the .SER standard varies somewhat between programs, so sometimes you need to help the program select the correct colour space if it doesn't auto-recognise it, but it's usually no big deal.
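The simplicity mentioned above is real: a SER file starts with a fixed 178-byte header holding the image geometry, bit depth, frame count and the Bayer pattern (the "ColorID"). The sketch below reads that header with the standard library; the field layout and ColorID values are taken from publicly documented descriptions of the SER format, so treat them as assumptions worth verifying against your own captures.

```python
import struct

# A hedged sketch of reading the fixed 178-byte SER file header.
# Field order and ColorID values follow public descriptions of the format.

COLOR_IDS = {0: "MONO", 8: "BAYER_RGGB", 9: "BAYER_GRBG",
             10: "BAYER_GBRG", 11: "BAYER_BGGR", 100: "RGB", 101: "BGR"}

def parse_ser_header(raw: bytes) -> dict:
    fmt = "<14s7i40s40s40s2q"   # little-endian, 178 bytes total
    (file_id, _lu_id, color_id, _endian, width, height,
     bit_depth, frames, _observer, _instrument, _telescope,
     _dt, _dt_utc) = struct.unpack(fmt, raw[:178])
    return {
        "color": COLOR_IDS.get(color_id, f"unknown({color_id})"),
        "width": width, "height": height,
        "bits_per_pixel": bit_depth, "frames": frames,
    }

# Construct a fake header for demonstration (RGGB, 1280x960, 12 bit, 500 frames).
fake = struct.pack("<14s7i40s40s40s2q", b"LUCAM-RECORDER", 0, 8, 0,
                   1280, 960, 12, 500, b"", b"", b"", 0, 0)
print(parse_ser_header(fake))
```

Because the Bayer pattern is recorded in the header, stacking software can debayer the frames correctly without you having to guess the pattern.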

What colour space should I use with Altair Colour Cameras?

When using a colour Altair camera in SharpCap or Altair Capture, you will have some colour space options to choose from. This blog outlines the various choices of colour space and their pros and cons.

You select the colour space mode like this:

1) In AltairCapture, under the Capture & Resolution > “Format” menu. Select bit depth separately under the “Bit depth” menu. E.g. Format “RAW” and a bit depth of 8 bit gives you the RAW8 colour space.

2) In SharpCap, you select from the list under the Capture Format and Area panel > “Colour Space”, e.g. RAW8.

In both capture programs, not every option will appear for every file format, because some formats don’t support certain colour spaces. (See our other blog, “What file format should I use with Altair Cameras?”.)

1) The RGB colour space: 

RGB Pros: Simple to use and simple post-processing. Images should look correct when viewed in any application. Camera based adjustments like white balance, gamma, brightness and contrast are available in the capture software, although these are usually performed in software on the PC. This is best for EAA or Video Astronomy.

RGB Cons: Files are large, as they are typically 3 bytes per pixel. Bit depth is limited to 8 bits. Debayering (turning the raw image to full colour) is performed by the camera driver, typically using a simple but fast algorithm. Adjustments like gamma, brightness and contrast cause data loss, because they are applied digitally after the sensor readout.

2) The RAW8 colour space:

RAW8 Pros: Exact data that comes off of the camera sensor with no post-processing. Post-processing (including debayering) can be done later at a higher quality. File size is small (only 1 byte per pixel).

RAW8 Cons: A smaller range of post-processing applications can work with the output files. Post-processing is more complex. Output files may appear to have a ‘chessboard’ effect if opened in applications that don’t understand raw formats. Bit depth limited to 8 bits.
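The ‘chessboard’ effect exists because each raw pixel records only one colour channel of the Bayer mosaic; debayering reconstructs full colour from neighbouring pixels. Here is an illustrative nearest-neighbour (“superpixel”) debayer for an RGGB pattern – a deliberately simple sketch, not the algorithm any particular driver uses, and real software uses much more sophisticated interpolation:

```python
# Illustrative superpixel debayer for an RGGB mosaic, using plain lists.
# Each 2x2 RGGB cell becomes one (R, G, B) pixel, halving resolution.

def debayer_rggb(raw):
    """Turn a 2D raw mosaic into a half-resolution RGB image."""
    rgb = []
    for y in range(0, len(raw) - 1, 2):
        row = []
        for x in range(0, len(raw[0]) - 1, 2):
            r = raw[y][x]                               # top-left: red
            g = (raw[y][x + 1] + raw[y + 1][x]) // 2    # average the two greens
            b = raw[y + 1][x + 1]                       # bottom-right: blue
            row.append((r, g, b))
        rgb.append(row)
    return rgb

# One 2x2 mosaic cell: R=200, greens 100 and 110, B=50.
print(debayer_rggb([[200, 100], [110, 50]]))  # [[(200, 105, 50)]]
```

Capturing raw and debayering later lets you choose a higher-quality interpolation than the fast one used for live preview.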

3) RAW12 colour space:

RAW12 Pros: Exact data that comes off of the camera sensor with no post-processing. Post-processing (including debayering) can be done later at a higher quality. Higher bit depth may give more information and more dynamic range if images are low noise with short exposures, like in solar/lunar imaging. (See other blogs on bit depth for more info).

RAW12 Cons: A smaller range of applications can work with the output files. Post-processing is more complex. Output files may appear to have a ‘chessboard’ effect if opened in applications that don’t understand raw formats. Files are larger (2 bytes per pixel), hence slower frame rates due to slower write speeds to the PC.
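The bytes-per-pixel figures above translate directly into file sizes and data rates. A quick back-of-envelope calculation, using an illustrative 1920×1080 sensor and 30 fps (not the specifications of any particular Altair model):

```python
# Per-frame file sizes and sustained data rates for each colour space,
# assuming an illustrative 1920x1080 sensor at 30 frames per second.

width, height = 1920, 1080
pixels = width * height

bytes_per_pixel = {"RGB": 3, "RAW8": 1, "RAW12": 2, "MONO": 1}

for space, bpp in bytes_per_pixel.items():
    frame_mb = pixels * bpp / 1e6
    print(f"{space:5s}: {frame_mb:5.2f} MB/frame, {frame_mb * 30:6.1f} MB/s at 30 fps")
```

RGB needs three times the disk bandwidth of RAW8 for the same sensor, which is why RAW capture often sustains higher frame rates.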

4) MONO colour space:

MONO Pros: Smaller file size (1 byte per pixel). Ideal for monochrome targets (using RGB filters on planets, narrowband filters on deep-sky objects, Hydrogen Alpha solar imaging, lunar imaging, etc.) when using a colour camera.

MONO Cons: Producing mono involves debayering to a colour image, which is then converted to monochrome, so the cons for RGB apply: debayering is performed by the camera driver, typically using a simple but fast algorithm, and adjustments like gamma, brightness and contrast cause data loss because they are applied digitally. Therefore, it may be better to capture as RAW8/12 and then make the final processed image monochrome.
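For reference, the colour-to-mono step is usually a weighted sum of the three channels. The sketch below uses the common Rec. 601 luma weights; the exact weights your capture software uses are an assumption here and may differ:

```python
# Sketch of RGB -> mono conversion using Rec. 601 luma weights.
# Capture software may use different weights or a plain average.

def rgb_to_mono(r, g, b):
    """Weighted sum reflecting the eye's higher sensitivity to green."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

print(rgb_to_mono(200, 105, 50))  # 127
```

Note the weights sum to 1.0, so a white pixel (255, 255, 255) maps to 255 and black stays black.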


(Special thanks to Robin Glover of SharpCap!)

My Altair camera has 8bit and 12bit output. Which would I choose?

Q: My Altair camera has 8bit and 12bit output. Which would I choose and when?

A: First we need to explain what bit depth is when it comes to imaging. Your computer uses ones and zeros to represent information – each one is a bit. An image with a bit depth of one gives each pixel a single digit, a 1 or a 0, which means the image would be black and white only, like this:

[Image: the Moon at 1-bit depth – black and white only]

But if we use more bits, say two, we have four possible combinations, like this: 00 01 10 11. That means we can display four levels of grey. Black, dark grey, light grey and white. Here’s the same image in 2-bit mode, with four levels of grey:

[Image: the same Moon image at 2-bit depth – four levels of grey]

The more levels of grey you add, the more tonal range you get. Here’s an 8-bit image with 256 levels of grey:

[Image: the same Moon image at 8-bit depth – 256 levels of grey]
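The relationship between bits and grey levels is simply 2 to the power of the bit depth. A quick sketch of quantising a full-range brightness value (0.0 to 1.0) at the depths discussed here:

```python
# Number of grey levels doubles with every extra bit: levels = 2 ** bits.

def quantise(value, bits):
    """Map a 0.0-1.0 brightness to the nearest of 2**bits levels."""
    levels = 2 ** bits
    return min(int(value * levels), levels - 1)

for bits in (1, 2, 8, 12):
    print(f"{bits:2d} bit: {2 ** bits:5d} levels, mid-grey -> level {quantise(0.5, bits)}")
```

So 1 bit gives 2 levels, 2 bits give 4, 8 bits give 256 and 12 bits give 4096.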

So let’s assume you’re doing video stacking or “lucky” imaging with your Altair camera. You’d think you should just choose the highest bit level – say 12-bit output. Well, 2 to the 12th power is 4096, which is a lot of grey levels. The thing is, in 12-bit mode you’ve still got to get those huge files onto your PC’s hard drive, which means the frame rate of your camera will likely drop and your video files will take far more space. So is it worth the hassle for all that extra tonal range?

If you are stacking many frames – say more than 50 – then going from 8-bit to 12-bit mode will not give an improvement if the pixel noise is typically bigger than one 8-bit level.

For example, if the value you’d get without noise is 20.5, then with noise of about 1 level you might see 50% of the frames read 20 for that pixel and 50% read 21 – so the average is about 20.5.

Suppose a neighbouring pixel has a true (noise-free) value of 20.9. Then all it takes is a little noise for most frames to read 21 for that pixel, a few to read 20, and maybe the odd 22 – yet the average will still be about 20.9. So using 12-bit mode has no benefit, because the extra sub-levels it allows between each 8-bit level are much smaller than the noise.
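The averaging argument above is easy to check with a small simulation: 8-bit frames with about 1 level of noise, a true pixel value of 20.5, stacked over many frames. The frame count and noise level here are illustrative choices, not measurements from any particular camera.

```python
import random

# Simulate stacking 8-bit frames where the noise (~1 level) dithers a
# true value of 20.5 across the 20/21 boundary, recovering it on average.

random.seed(42)  # reproducible run

true_value = 20.5
frames = 10_000
readings = [max(0, min(255, round(true_value + random.gauss(0, 1.0))))
            for _ in range(frames)]
stacked = sum(readings) / frames
print(f"stacked average: {stacked:.2f}")  # close to 20.5
```

Each individual frame only ever reads whole 8-bit levels, yet the stack recovers the fractional value – which is exactly why extra bit depth adds nothing once the noise exceeds one level.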

Planetary imaging: Planets are quite dim, especially at high power, so typically in planetary imaging you run with high gain – think of gain as “ISO”. You’ll want high gain to keep exposure durations short and freeze the distortion caused by the air. Using high gain increases noise though, so the noise will be much more than 1 level, and 8 bit is quite good enough.

Lunar imaging or white light solar imaging might be a bit different – you could probably run at low gain if the object is quite bright, in which case the noise level might be less than one pixel level, so maybe 12 bit will help you there.

For long exposures of deep-sky objects, noise builds up quite a lot, so you’d typically run at minimum gain to get rid of that noise. So in this low gain situation, 12 bit should help improve the image, and it should also increase the dynamic range between the dimmest and brightest features making the object appear more detailed to the eye.

Video astronomy (live stacking in SharpCap, for example) seems to work best at fairly high gains, so 8 bit might be sufficient, but you can start to see more tonal range in 12-bit mode, and since you aren’t going for high frame rates or recording vast amounts of data, it can’t hurt.

Some more advanced theory, if you’re interested:

Now, all this is complicated by the three sources of noise which all camera sensors have, regardless of type, design or make:

1) Readout noise: A function of the sensor design.
2) Shot noise: A fundamental variation in the number of photons arriving on a pixel. It’s proportional to the square root of the number of photons received at a pixel during the exposure.
3) Thermal noise: A function of sensor design and temperature. Thermal noise usually increases in proportion to the exposure time. For short exposures used in solar system imaging (fractions of a second) it can largely be ignored. (for a description of thermal noise and FPN or Fixed Pattern Noise, see our other blog on the dark frame correction features in AltairCapture).

If you think of a sensor with a full well capacity of 10,000 electrons, a pixel that is nearly full will have a shot noise of about 100 electrons – 1% of its full range, which is larger than 1/256 of full range, i.e. more than one 8-bit level.

A pixel that only collects 10 electrons will have a shot noise of about 3 electrons.

If readout and thermal noise are also small, then that pixel might have a total noise of less than 1/256 of full range, so 12 bit would be an advantage for those nearly-dark pixels.
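The shot-noise numbers above follow directly from the square-root rule, worked through here for the same illustrative 10,000-electron full well (not the specification of any particular camera):

```python
import math

# Shot noise = sqrt(signal in electrons), compared against the size of
# one 8-bit grey level for a 10,000-electron full well.

full_well = 10_000
one_8bit_level = full_well / 256          # ~39 electrons per 8-bit level

for electrons in (10_000, 10):            # nearly full vs nearly dark pixel
    shot_noise = math.sqrt(electrons)
    relation = ">" if shot_noise > one_8bit_level else "<"
    print(f"{electrons:6d} e-: shot noise {shot_noise:6.1f} e- "
          f"({relation} one 8-bit level)")
```

The nearly full pixel’s ~100 electrons of shot noise dwarfs one 8-bit level (~39 electrons), while the dim pixel’s ~3 electrons is far below it – so the extra 12-bit sub-levels only carry real information in the dark parts of the image.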

The choice of which bit depth to use is up to you but this should help you make a more informed choice.

(With thanks to Robin Glover for his input).