Converting the Data
The format used for the database is JSON (a lightweight data-interchange format derived from JavaScript object notation). We recommend starting with an existing file from the database that suits your needs and saving it under the name of your sensor or filter. You can then simply replace the values in the file with the data you have obtained.
In the wavelength array, enter your wavelength measurements. Make sure to set the units field to one of the following values: angstroms, nm, micrometres, or m.
In the values array, enter transmittance values for filters, or quantum efficiency values for sensors.
Set the range field according to your data scale (e.g., "range": 100 if your values are percentages, "range": 1 if they are normalized to 1). A short example of the whole workflow is sketched below.
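The following minimal Python sketch assumes a hypothetical starting file (mono_filters/Template.json) and made-up measurements; the exact placement of the units and range fields should match whichever existing database file you copied:
import json

# Your measured data (placeholder numbers).
wavelength_nm = [380, 400, 450, 500, 550, 600, 650, 700]
transmittance_percent = [0.4, 1.5, 62.0, 88.5, 91.2, 90.7, 89.9, 74.3]

# Start from an existing file copied out of the database
# (hypothetical name here) and overwrite its data fields.
with open("mono_filters/Template.json") as f:
    data = json.load(f)

entry = data[0]
entry["name"] = "My Filter"            # placeholder; also review model, manufacturer, dataSource, etc.
entry["wavelength"] = wavelength_nm
entry["units"] = "nm"                  # one of: angstroms, nm, micrometres, m
entry["values"] = transmittance_percent
entry["range"] = 100                   # the values above are percentages

with open("mono_filters/My_Filter.json", "w") as f:
    json.dump(data, f, indent=2)
Comparing the result against an existing file from the same folder is a good final check.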
How to Contribute
The SPCC database is designed to store JSON files describing the OSC/monochrome sensors and filters available on the market. Its primary objective is to gather extensive data, fostering collaboration within the community.
We greatly value community contributions and encourage active participation. We need data spanning, ideally, 300nm to 1100nm. Software tools can be used to extract the curves from charts found online, and contacting manufacturers directly for data is also an option.
Important
We do not include narrowband filters. These highly specific filters are synthesized in Siril, ensuring precision. This also applies to duo-narrowband filters.
JSON File Format Reference
Here is the template for the JSON files used in the SPCC database:
[
{
"model": "sensor model / filter set",
"name": "sensor / filter name",
"type": "MONO_SENSOR | OSC_SENSOR | MONO_FILTER | OSC_FILTER | OSC_LPF | WB_REF",
"dataQualityMarker": 1 - 5,
"dataSource": "Describe where the data came from",
"manufacturer": "Manufacturer name",
"version": 1,
"channel": "RED | GREEN | BLUE | LUM",
"wavelength": [Comma separated array of wavelengths],
"values": [Comma separated array of values]
}
]
Important Notes
Definition of the dataQualityMarker field:
1. Data of unknown provenance. Not accepted for the siril-spcc-database repository.
2. Data scanned from OEM or other reputable plots in image format.
3. Lower-resolution tabulated data provided by the OEM, or academic data relating to ideal standard filter transmittance (e.g. generic standard photometric filters).
4. High-resolution (no more than 2nm spacing) tabulated data provided by the OEM.
5. Data specific to your own filter, which you have personally calibrated using appropriate equipment. This is the highest possible quality marker and will never be given to .json files in the repository, which can only ever be generic to an equipment model, not specific to your individual equipment item. Note that the actual quality of this data depends entirely on the quality of your calibration equipment: the old adage "garbage in, garbage out" applies.
The model name requirements:
- Must be identical for all related JSON objects in a set
Examples:
- RGB filter set: "model": "Chroma RGB"
- OSC sensor: "model": "ZWO ASI2600MC"
The channel field:
- Required only for "type": "OSC_SENSOR" or "type": "MONO_FILTER"
- For OSC sensors, include one JSON object per channel (RED, GREEN, BLUE); see the sketch after this list
- Preferred channel order: RED, GREEN, BLUE
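A rough sketch of what that per-channel structure looks like, with a hypothetical model name and made-up numbers (most other required fields are omitted here for brevity but are still needed in a real file):
import json

wavelength = [400, 500, 600, 700]   # placeholder wavelengths, in nm

# Three objects, identical "model", one "channel" each,
# in the preferred RED/GREEN/BLUE order.
osc_sensor = [
    {"model": "Example OSC Cam", "name": "Example OSC Cam (red channel)",
     "type": "OSC_SENSOR", "channel": "RED",
     "wavelength": wavelength, "values": [5, 12, 78, 30]},
    {"model": "Example OSC Cam", "name": "Example OSC Cam (green channel)",
     "type": "OSC_SENSOR", "channel": "GREEN",
     "wavelength": wavelength, "values": [10, 80, 35, 8]},
    {"model": "Example OSC Cam", "name": "Example OSC Cam (blue channel)",
     "type": "OSC_SENSOR", "channel": "BLUE",
     "wavelength": wavelength, "values": [70, 25, 6, 3]},
]

print(json.dumps(osc_sensor, indent=2))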
The wavelength array requirements:
- Minimum coverage: 380nm to 700nm
- Maximum useful range: 336nm to 1020nm (Gaia DR3 spectra limits)
- Values must be monotonically increasing (a quick check is sketched below)
- No duplicate values allowed
- Must use one of the specified units (angstroms, nm, micrometres, m)
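As a quick sanity check of these requirements, a few lines of Python are enough (a rough sketch, not part of the repository tooling; the file name is hypothetical):
import json

def check_wavelengths(path):
    # Verify that each object's wavelength array is strictly increasing,
    # which also rules out duplicates, and report the covered range.
    with open(path) as f:
        data = json.load(f)
    for obj in data:
        wl = obj["wavelength"]
        ok = all(a < b for a, b in zip(wl, wl[1:]))
        status = "OK" if ok else "NOT strictly increasing"
        print(f'{obj.get("name", "?")}: {status}, covers {wl[0]} to {wl[-1]}')

check_wavelengths("mono_filters/My_Filter.json")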
Note
If your sensor data only extends down to 400nm (which is common with some manufacturers), it is acceptable to extrapolate a single point at 380nm. The sensor response below 400nm typically follows a predictable pattern across different sensors. Adding this extrapolated point at 380nm is preferable to letting the curve end at 400nm, which would effectively treat all response below 400nm as zero. The impact of this extrapolation is minimal since the CIE 1931 response is very low in this wavelength range.
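If you need to add that 380nm point, one simple possibility (the database does not prescribe a particular method) is a linear continuation of the two lowest measured points, clamped at zero:
# Placeholder data starting at 400nm; values are QE in percent.
wavelength = [400, 420, 440]
values = [30.0, 45.0, 52.0]

# Extend the slope of the two lowest points back to 380nm, never below zero.
slope = (values[1] - values[0]) / (wavelength[1] - wavelength[0])
qe_380 = max(0.0, values[0] + slope * (380 - wavelength[0]))

wavelength = [380] + wavelength
values = [round(qe_380, 2)] + values
print(wavelength, values)   # [380, 400, 420, 440] [15.0, 30.0, 45.0, 52.0]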
The values array requirements:
- For filters: contains transmittance values
- For sensors: contains quantum efficiency values
- Set an appropriate range value (e.g., 100 for percentages)
- Siril scales all values to the 0.0-1.0 range internally
Verifying the Data
We have provided a set of Python scripts in the utils folder of the repository to help you work with your data. The following tools are available:
Processing Scripts
process_mono_sensor.py
process_osc_filter.py
process_osc_sensor.py
These scripts can assist in converting CSV files into JSON format.
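The scripts are the reference tools; purely as an illustration of the kind of conversion involved (with assumed column names and file paths, not the scripts' actual interface), the basic CSV-to-JSON step looks like this:
import csv
import json

# Read a two-column CSV with "wavelength" and "value" headers
# (hypothetical file; the repository scripts have their own expected inputs).
rows = []
with open("my_measurements.csv", newline="") as f:
    for row in csv.DictReader(f):
        rows.append((float(row["wavelength"]), float(row["value"])))
rows.sort()   # keep wavelengths monotonically increasing

# Placeholder metadata to be edited by hand before submission.
entry = [{
    "model": "Example Filter Set",
    "name": "Example Filter",
    "type": "MONO_FILTER",
    "dataQualityMarker": 2,
    "dataSource": "Describe where the data came from",
    "manufacturer": "Example Corp",
    "version": 1,
    "channel": "RED",
    "wavelength": [w for w, _ in rows],
    "values": [v for _, v in rows],
}]

with open("mono_filters/Example_Filter.json", "w") as f:
    json.dump(entry, f, indent=2)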
Visualization Tools
The visualize.py script allows you to visualize the resulting JSON files. Use it with the following command:
python utils/visualize.py mono_sensors/Sony_IMX.json
This will generate a plot of the curves contained in the JSON file.
Data Validation
The remove_duplicates.py script helps ensure that your data does not contain any duplicates. It's a useful tool for cleaning your data before finalizing the JSON file for submission to the database.
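The repository script is the tool to actually use; as a rough illustration of the idea (not necessarily how the script itself is implemented), duplicate wavelengths can be dropped like this:
def drop_duplicates(wavelength, values):
    # Keep the first value seen for each wavelength, preserving order.
    seen = set()
    wl_out, val_out = [], []
    for w, v in zip(wavelength, values):
        if w not in seen:
            seen.add(w)
            wl_out.append(w)
            val_out.append(v)
    return wl_out, val_out

print(drop_duplicates([400, 410, 410, 420], [10, 12, 12, 15]))
# -> ([400, 410, 420], [10, 12, 15])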
Submitting Your Files to the Database
To add a new file to the database, we use the GitLab merge request (MR) process. Don't worry, it's simpler than it sounds! First, you'll need to create a GitLab account if you don't have one yet. Then, follow this quick step-by-step guide for beginners.
Submission Process
Fork the repository
Go to the Siril SPCC Database GitLab page and click the Fork button. This creates your own copy of the database that you can edit.
Clone your fork
Once you've forked the repository, clone it to your computer using this command in your terminal:
git clone https://gitlab.com/your-username/siril-spcc-database.git
Replace your-username with your actual GitLab username.
Add your file
Place your JSON file in the correct folder:
- mono_sensors/ for monochrome sensors
- osc_sensors/ for color sensors
- mono_filters/ for monochrome filters
- osc_filters/ for color filters
Commit your changes
After adding your file, commit the changes with the following commands:
git add .
git commit -m "Added data for [Your Sensor/Filter Name]"
Push your changes
Push the changes to your forked repository:
git push origin main
Create a Merge Request
Go back to the original repository on GitLab, and you'll see a button asking if you want to create a Merge Request. Click on it, review your changes, and submit the request. Our team will then review and merge your file into the main database!
Note
Please open a Merge Request for each new dataset added to facilitate easier review and ensure seamless integration. Thank you for your contributions!