Blog · 2026-05-13 · 8 min read
How to count your astrophotography integration time
If someone asked you, right now, how many hours of integration you have on M31 — could you answer? Most imagers can't, and not because they're disorganised. The question is genuinely harder than it sounds.
This is a piece about that question: why it's hard, the four approaches most imagers reach for first, where each one breaks down at scale, and a metadata-first approach that holds up across a multi-year archive.
Why the question is harder than it sounds
"How many hours have I imaged?" looks like arithmetic. It isn't. To answer it correctly you have to:
- Walk every folder in your archive (some local, some on a NAS, some on a USB drive).
- Identify which files are lights, not flats, darks, or biases.
- Map each light to a target — and your folder names disagree with your FITS headers, which disagree with NINA's project name, which disagree with what you wrote in the journal.
- Multiply the count by the per-frame exposure, which varies between rigs and even within the same session if you swapped to short subs after a satellite trail.
- Bucket by filter, because L:R:G:B:Ha totals matter more than a single grand total.
- Avoid double-counting frames you stacked in two different projects.
That's the easy version. Add a second telescope, a remote site, an SD card you forgot to copy off in 2023, and a brief flirtation with Sequence Generator Pro before you went back to NINA, and the question becomes effectively impossible to answer by hand.
Four approaches imagers reach for first
1. The spreadsheet
You start a Google Sheet. One row per session: date, target, scope, filter, sub length, sub count. It works. For about three months.
Then a clear week happens, you image until 4am for six nights running, the spreadsheet doesn't get updated, and now you're either reconstructing from memory or giving up. The spreadsheet was capturing data your sequencer was already writing to FITS headers — you were doing manual data entry against a system of record that already exists.
2. The acquisition software's session log
NINA, SGP, Voyager, APT — they all log what they captured. The trouble is that those logs cover what this software did on this rig during this session. They don't cover the year you used a different sequencer. They don't cover frames you imported from a friend. And they don't tell you anything about the raw subs sitting in /Volumes/Astro-2024/M81/, from a session you've long since forgotten.
Session logs are great for tonight. They are not your archive.
3. Counting files in Finder / Explorer
"There are 412 lights in the M31 folder. They're 300-second subs. So 412 × 300 = 123,600 seconds = 34.3 hours."
Two problems. First: are you sure all 412 are lights? Or is one of them the test exposure you forgot to delete? Second: were they all 300 seconds? You may have started shooting 180s on bad-seeing nights and never standardised. The folder count gives you a number, but not a trustworthy one.
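If you'd rather check than trust the folder, a few lines of astropy will tally what's actually in there. A minimal sketch, assuming astropy is installed; the folder path is a placeholder for one of your own target directories:

```python
from pathlib import Path
from collections import Counter
from astropy.io import fits

folder = Path("/Volumes/Astro/M31")  # placeholder: point this at your own folder

types, exposures = Counter(), Counter()
for path in folder.glob("*.fit*"):
    try:
        hdr = fits.getheader(path)  # header only; pixels are never read
    except Exception:
        continue
    types[hdr.get("IMAGETYP", "?")] += 1
    exposures[hdr.get("EXPTIME", "?")] += 1

print(types)      # did a DARK or a stray test frame sneak in?
print(exposures)  # all 300s, or a mix of 300s and 180s?
```

If either counter comes back with more than one entry where you expected one, the Finder arithmetic was already wrong.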
4. PixInsight's StatisticsAfterCalibration / SubframeSelector
If you live inside PixInsight, you can ingest a folder, run SubframeSelector, and read off totals. This is closer to the right idea — you're letting software inspect the files rather than guessing — but it's per-target, per-session, and PixInsight isn't built for "show me totals across my entire archive grouped by month and filter." It's an analysis tool, not a ledger.
The metadata is already there
Every FITS file your sequencer wrote already contains the answer. The header is the first few kilobytes of the file and it carries — at minimum — these fields:
| Header | What it tells you |
|---|---|
| OBJECT | The target name as your sequencer wrote it (e.g. "M31", "NGC_7000", "ic1318") |
| FILTER | Which filter was in the wheel (L, R, G, B, Ha, OIII, SII, …) |
| EXPTIME | Exposure in seconds for this single sub |
| DATE-OBS | UTC timestamp of the exposure — the canonical "when" |
| IMAGETYP | "LIGHT" / "FLAT" / "DARK" / "BIAS" — so you can exclude calibration |
| INSTRUME / TELESCOP | Camera and scope identifiers — so you can attribute hours to a rig |
| SITELAT / SITELONG | Where you were imaging from |
This is your ledger. Every choice you made — the filter, the exposure, the rig — is recorded at capture time, in the file itself. No spreadsheet, no naming convention, no prayer that you remembered to update something afterwards. Every modern sequencer writes these headers, and any FITS-aware tool can read them.
The exercise reduces to: walk every .fit / .fits file in the archive, parse the header, group, sum.
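If you want to see those fields for yourself, astropy will read a header without touching the pixel data. A minimal sketch; the file path is a placeholder for any light frame in your archive:

```python
from astropy.io import fits

# Placeholder path: any light frame from your own archive will do.
hdr = fits.getheader("/Volumes/Astro/M31/M31_Ha_300s_0042.fits")

for key in ("OBJECT", "FILTER", "EXPTIME", "DATE-OBS",
            "IMAGETYP", "INSTRUME", "TELESCOP"):
    print(f"{key:9s} {hdr.get(key)}")
```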
Building the ledger by hand (for the curious)
If you only want totals once and you're comfortable in a terminal, you can do this with astropy. The whole thing is about twenty lines:
```python
from astropy.io import fits
from pathlib import Path
from collections import defaultdict

totals = defaultdict(float)  # (target, filter) -> seconds

for path in Path("/Volumes/Astro").rglob("*.fit*"):
    try:
        hdr = fits.getheader(path)  # header only; pixel data is never read
    except Exception:
        continue  # unreadable or truncated file: skip it
    # Keep lights only; an empty IMAGETYP gets the benefit of the doubt.
    if hdr.get("IMAGETYP", "").upper() not in ("LIGHT", "LIGHT FRAME", ""):
        continue
    target = (hdr.get("OBJECT") or "unknown").strip()
    filt = (hdr.get("FILTER") or "?").strip()
    exp = float(hdr.get("EXPTIME") or 0)
    totals[(target, filt)] += exp

for (target, filt), secs in sorted(totals.items()):
    print(f"{target:20s} {filt:5s} {secs/3600:6.2f} h")
```
This works. For an archive of 60,000 frames it takes maybe ten minutes on a local SSD. Drop it in a script and you have your numbers.
The reason most imagers don't do this is that the script is the easy 30%. The hard 70% is everything that follows: filter normalisation (your sequencer wrote "Ha" but on Tuesday it wrote "H_alpha"), target normalisation (M31 vs Andromeda vs NGC 224), per-rig attribution when TELESCOP is empty, handling files that errored, deduplication, and then turning the numbers into something you can actually look at — per month, per scope, per filter, drillable.
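To make that 70% concrete, here is what the first slice of it might look like. A sketch only: FILTER_ALIASES, TARGET_ALIASES, and frame_key are made-up names, the alias entries are illustrative rather than complete, and a real archive grows its own tables over time.

```python
# Illustrative alias maps: a real archive accumulates its own entries.
FILTER_ALIASES = {
    "HA": "Ha", "HALPHA": "Ha", "H_ALPHA": "Ha",
    "O3": "OIII", "S2": "SII",
    "LUM": "L", "LUMINANCE": "L",
}
TARGET_ALIASES = {
    "ANDROMEDA": "M31", "NGC 224": "M31", "NGC_224": "M31",
}

def normalise_filter(raw: str) -> str:
    return FILTER_ALIASES.get(raw.strip().upper(), raw.strip())

def normalise_target(raw: str) -> str:
    return TARGET_ALIASES.get(raw.strip().upper(), raw.strip())

def frame_key(hdr) -> tuple:
    # Same camera plus same start timestamp is the same physical exposure,
    # even when the file has been copied into two project folders.
    return (hdr.get("INSTRUME"), hdr.get("DATE-OBS"))
```

Inside the loop above you'd run target and filt through the normalisers, and keep a seen set of frame keys so each exposure is counted once. That covers the obvious aliases and duplicates; the long tail (empty TELESCOP values, one-off target spellings) is the ongoing curation.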
That's the work that turns a count into a ledger.
What you can do with the ledger
The number on its own is satisfying. But the more interesting questions only become askable once the data is structured:
- Per target, per filter. "I'm at 22h luminance on M81 but only 4h Ha. I should switch."
- Per scope. "I bought the second rig 18 months ago. It has 73 hours on it. Was it worth it?"
- Per location. "Bortle 7 home: 340h. Bortle 4 dark site: 60h. The dark site is producing more usable Ha per hour."
- Per month. "I averaged 38h/month last spring and 12h/month this spring. What changed?"
- Per session. "August 13, 2024 — 6 hours, 4 targets, 3 filters. What was I thinking?"
None of these require a spreadsheet. All of them are derivable from the FITS headers you've been writing for years.
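As one illustration, the per-month view is a small variation on the earlier script: DATE-OBS is an ISO timestamp, so the month is simply its first seven characters. Same caveats as before, and the archive root is a placeholder:

```python
from astropy.io import fits
from pathlib import Path
from collections import defaultdict

monthly = defaultdict(float)  # "YYYY-MM" -> seconds

for path in Path("/Volumes/Astro").rglob("*.fit*"):  # placeholder root
    try:
        hdr = fits.getheader(path)
    except Exception:
        continue
    if hdr.get("IMAGETYP", "").upper() not in ("LIGHT", "LIGHT FRAME", ""):
        continue
    date_obs = str(hdr.get("DATE-OBS", ""))
    if len(date_obs) >= 7:
        monthly[date_obs[:7]] += float(hdr.get("EXPTIME") or 0)

for month, secs in sorted(monthly.items()):
    print(f"{month}  {secs/3600:6.2f} h")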
Where Photon Ledger fits in
Photon Ledger is a small desktop app that does exactly this — walks your archive once, parses every FITS header, builds a local database, and gives you the views above. It reads only headers (the first few kilobytes of each file); pixel data is never touched. A 60,000-frame archive on a local SSD indexes in two to five minutes. NAS works too.
Free up to 5,000 indexed frames on SQLite — enough for many beginner archives. Pro is a one-time purchase that removes the cap, adds a PostgreSQL backend for shared/networked storage, and includes CSV export for everything you index. Download for macOS, Windows, or Linux →
If you want to roll your own with the Python snippet above, go ahead. But the hard part isn't reading the headers; it's all the messy classification and grouping and rendering that turns the raw numbers into something usable. That's the work this app removes.
The point
You probably have the data to answer "how many hours have I spent on this object" already. The data is the file. It just hasn't been read yet.