API reference

Data model

The data model.

datamodel Data classes and acceptable variables as defined by the SolarForecastArbiter Data Model document.

There are two kinds of sites:

datamodel.Site(name, latitude, longitude, …) Class for keeping track of Site metadata.
datamodel.SolarPowerPlant(name, latitude, …) Class for keeping track of metadata associated with solar power plant Sites.

Several modeling parameter classes are associated with solar power plant sites:

datamodel.PVModelingParameters(ac_capacity, …) Class for keeping track of generic PV modeling parameters.
datamodel.FixedTiltModelingParameters(…) A class based on PVModelingParameters that has additional parameters for fixed tilt PV systems.
datamodel.SingleAxisModelingParameters(…) A class based on PVModelingParameters that has additional parameters for single axis tracking systems.

The Observation and Forecast classes:

datamodel.Observation(name, variable, …) A class for keeping track of metadata associated with an observation.
datamodel.Forecast(name, issue_time_of_day, …) A class to hold metadata for Forecast objects.

Probabilistic forecasts:

datamodel.ProbabilisticForecast(axis, …) Tracks a group of ProbabilisticForecastConstantValue objects that together describe 1 or more points of the same probability distribution.
datamodel.ProbabilisticForecastConstantValue(…) Extends Forecast dataclass to include probabilistic forecast attributes.

Event forecasts:

datamodel.EventForecast(name, …) Extends Forecast dataclass to include event forecast attributes.

Aggregates:

datamodel.AggregateObservation(observation, …) Class for keeping track of an Observation and when it is added and (optionally) removed from an Aggregate.
datamodel.Aggregate(name, description, …) Class for keeping track of Aggregate metadata.

Data validation toolkit filters for use with reports:

datamodel.BaseFilter() Base class for filters to be applied in a report.
datamodel.QualityFlagFilter(quality_flags, …) Quality flag filters to be applied in a report.
datamodel.TimeOfDayFilter(time_of_day_range, …) Class representing a time of day filter to be applied in a report.
datamodel.ValueFilter(metadata, …) Class representing an observation or forecast value filter to be applied in a report.

Containers to associate forecasts and observations for use with reports:

datamodel.ForecastObservation(forecast, …) Class for pairing Forecast and Observation objects for evaluation.
datamodel.ForecastAggregate(forecast, …) Class for pairing Forecast and Aggregate objects for evaluation.
datamodel.ProcessedForecastObservation(name, …) Hold the processed forecast and observation data with the resampling parameters.

Report metrics and validation:

datamodel.MetricResult(name, forecast_id, …) Class for storing the results of many metric calculations for a single observation and forecast pair.
datamodel.MetricValue(category, metric, …) Class for storing the result of a single metric calculation.
datamodel.ValidationResult(flag, count, …) Store the validation result for a flag or combination of flags.
datamodel.PreprocessingResult(name, count) Stores summary information to record preprocessing results that detail how data has been handled.
datamodel.ReportMessage(message, step, …) Class for intercepting errors and warnings associated with report processing.

Report plots:

datamodel.RawReportPlots(figures, …) Class for storing the collection of all metric plots of a raw report.
datamodel.ReportFigure() Parent class for different types of Report Figures.

Reports:

datamodel.ReportParameters(name, start, end, …) Parameters required to define and generate a Report.
datamodel.RawReport(generated_at, timezone, …) Class for holding the result of processing a report request including some metadata, the calculated metrics, plots, the processed forecast/observation data, and messages from report generation.
datamodel.Report(report_parameters, …) Class for keeping track of report metadata and the raw report that can later be rendered to HTML or PDF.

Cost:

datamodel.ConstantCost(cost, aggregation, net) A constant cost per unit error of the forecasted variable.
datamodel.TimeOfDayCost(times, cost, …) Cost values based on the time of day.
datamodel.DatetimeCost(datetimes, …) Cost values based on datetimes.
datamodel.CostBand(error_range, …) Cost specification for one error band.
datamodel.ErrorBandCost(bands, …) Cost that varies based on the error value.
datamodel.Cost(name, type, parameters, …) Specify how cost metrics should be calculated.

All datamodel objects have from_dict and to_dict methods:

datamodel.BaseModel.from_dict(input_dict[, …]) Construct a dataclass from the given dict, matching keys with the class fields.
datamodel.BaseModel.to_dict() Convert the dataclass into a dictionary suitable for uploading to the API.
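The from_dict/to_dict pattern can be illustrated with a plain dataclass; MiniSite and its fields are hypothetical, and the real BaseModel additionally handles nested objects and type coercion:

```python
# Illustrative sketch of the from_dict/to_dict pattern with a minimal,
# hypothetical dataclass (not the library's actual implementation).
from dataclasses import dataclass, fields, asdict

@dataclass
class MiniSite:
    name: str
    latitude: float
    longitude: float

    @classmethod
    def from_dict(cls, d):
        # match dict keys with the class fields, ignoring extras
        names = {f.name for f in fields(cls)}
        return cls(**{k: v for k, v in d.items() if k in names})

site = MiniSite.from_dict({'name': 'Desert Rock', 'latitude': 36.62,
                           'longitude': -116.02, 'unused_key': 1})
payload = asdict(site)  # dict shape suitable for an API upload
```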

PV modeling

The pvmodel module contains functions closely associated with PV modeling.

pvmodel Calculate AC power and modeling intermediates from system metadata, times, and weather data.

Several utility functions wrap pvlib functions:

pvmodel.calculate_solar_position(latitude, …) Calculates solar position using pvlib’s implementation of NREL SPA.
pvmodel.complete_irradiance_components(ghi, …) Uses the Erbs model to calculate DNI and DHI from GHI.
pvmodel.calculate_clearsky(latitude, …) Calculates clear sky irradiance using the Ineichen model and the SoDa climatological turbidity data set.

Three functions are useful for determining AOI, surface tilt, and surface azimuth. aoi_func_factory() is helpful for standardizing the calculations for tracking and fixed systems. See calculate_poa_effective(), for example.

pvmodel.aoi_func_factory(modeling_parameters) Create a function to calculate AOI, surface tilt, and surface azimuth from system modeling_parameters.
pvmodel.aoi_fixed(surface_tilt, …) Calculate AOI for fixed system, bundle return with tilt, azimuth for consistency with similar tracker function.
pvmodel.aoi_tracking(axis_tilt, …) Calculate AOI, surface tilt, and surface azimuth for tracking system.
pvmodel.calculate_poa_effective_explicit(…) Calculate effective plane of array irradiance from system metadata, solar position, and irradiance components.
pvmodel.calculate_poa_effective(aoi_func, …) Calculate effective plane of array irradiance from system metadata, solar position, and irradiance components.
pvmodel.calculate_power(dc_capacity, …[, …]) Calculate AC power from system metadata, plane of array irradiance, and weather data using the PVWatts model.
pvmodel.irradiance_to_power(…[, temp_air, …]) Calculate AC power from system metadata, solar position, and ghi, dni, dhi.
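The PVWatts DC relationship behind calculate_power can be sketched as follows; the temperature coefficient and 25 °C reference used here are illustrative assumptions, not necessarily the library's defaults:

```python
# Minimal sketch of the PVWatts DC power model (illustrative parameters).
import numpy as np

def pvwatts_dc(poa_effective, temp_cell, dc_capacity, gamma=-0.004):
    """DC power from effective POA irradiance (W/m^2) and cell temp (deg C)."""
    return (dc_capacity * poa_effective / 1000.
            * (1 + gamma * (temp_cell - 25.)))

dc = pvwatts_dc(np.array([0., 500., 1000.]), temp_cell=25., dc_capacity=10.)
# at 25 deg C the temperature term drops out, so dc is [0., 5., 10.]
```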

Reference forecasts

Entry points

High-level functions for NWP and persistence forecasts.

reference_forecasts.main.run_nwp(forecast, …) Calculate benchmark irradiance and power forecasts for a Forecast or ProbabilisticForecast.
reference_forecasts.main.run_persistence(…) Run a persistence forecast for an observation.
reference_forecasts.main.find_reference_nwp_forecasts(…) Sort through all forecasts to find those that should be generated by the Arbiter from NWP models.
reference_forecasts.main.process_nwp_forecast_groups(…) Groups NWP forecasts based on piggyback_on, calculates the forecast as appropriate for run_time, and uploads the values to the API.
reference_forecasts.main.make_latest_nwp_forecasts(…) Make all reference NWP forecasts for run_time that are within issue_buffer of the next issue time for the forecast.
reference_forecasts.main.fill_nwp_forecast_gaps(…) Make all reference NWP forecasts that are missing from start to end.
reference_forecasts.main.make_latest_persistence_forecasts(…) Make all reference persistence forecasts that need to be made up to max_run_time.
reference_forecasts.main.make_latest_probabilistic_persistence_forecasts(…) Make all reference probabilistic persistence forecasts that need to be made up to max_run_time.
reference_forecasts.main.fill_persistence_forecasts_gaps(…) Make all reference persistence forecasts that need to be made between start and end.
reference_forecasts.main.fill_probabilistic_persistence_forecasts_gaps(…) Make all reference probabilistic persistence forecasts that need to be made between start and end.

NWP models

reference_forecasts.models.hrrr_subhourly_to_subhourly_instantaneous(…) Subhourly (15 min) instantaneous HRRR forecast.
reference_forecasts.models.hrrr_subhourly_to_hourly_mean(…) Hourly mean HRRR forecast.
reference_forecasts.models.rap_ghi_to_instantaneous(…) Hourly instantaneous RAP forecast.
reference_forecasts.models.rap_cloud_cover_to_hourly_mean(…) Take hourly RAP instantaneous cloud cover and convert it to hourly average forecasts.
reference_forecasts.models.gfs_quarter_deg_3hour_to_hourly_mean(…) Take 3 hr GFS and convert it to hourly average data.
reference_forecasts.models.gfs_quarter_deg_hourly_to_hourly_mean(…) Take 1 hr GFS and convert it to hourly average data.
reference_forecasts.models.gfs_quarter_deg_to_hourly_mean(…) Hourly average forecasts derived from GFS 1, 3, and 12 hr frequency output.
reference_forecasts.models.nam_12km_hourly_to_hourly_instantaneous(…) Hourly instantaneous forecast.
reference_forecasts.models.nam_12km_cloud_cover_to_hourly_mean(…) Hourly average forecast.

Probabilistic NWP models

reference_forecasts.models.gefs_half_deg_to_hourly_mean(…) Hourly average forecasts derived from GEFS 3, 6, and 12 hr frequency output.

Forecast processing

Functions that process forecast data.

reference_forecasts.forecast.cloud_cover_to_ghi_linear(…) Convert cloud cover to GHI using a linear relationship.
reference_forecasts.forecast.cloud_cover_to_irradiance_ghi_clear(…) Estimates irradiance from cloud cover and clear sky GHI.
reference_forecasts.forecast.cloud_cover_to_irradiance(…) Estimates irradiance from cloud cover.
reference_forecasts.forecast.resample(arg[, …]) Resamples an argument, allowing for None.
reference_forecasts.forecast.reindex_fill_slice(arg) Reindex data to shorter intervals (create NaNs) from start to end, fill NaNs in gaps using fill_method, fill NaNs before first valid time using bfill, fill NaNs after last valid time using ffill, then slice output from start_slice to end_slice.
reference_forecasts.forecast.unmix_intervals(mixed) Convert mixed interval averages into pure interval averages.
reference_forecasts.forecast.sort_gefs_frame(frame) Sort a DataFrame from a GEFS forecast.
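The linear relationship behind cloud_cover_to_ghi_linear can be sketched as below; the 0.35 offset and its expression as a fraction are assumptions here, so check the function's documentation for the defaults and units it actually uses:

```python
# Sketch of a linear cloud-cover-to-GHI conversion (offset assumed 0.35).
import numpy as np

def cloud_cover_to_ghi_linear(cloud_cover, ghi_clear, offset=0.35):
    """cloud_cover in percent (0-100), ghi_clear in W/m^2."""
    return (offset + (1 - offset) * (1 - cloud_cover / 100.)) * ghi_clear

ghi = cloud_cover_to_ghi_linear(np.array([0., 100.]), ghi_clear=1000.)
# 0 % cloud cover returns ghi_clear; 100 % returns offset * ghi_clear
```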

Persistence

reference_forecasts.persistence.persistence_scalar(…) Make a persistence forecast using the mean value of the observation from data_start to data_end.
reference_forecasts.persistence.persistence_interval(…) Make a persistence forecast for an observation using the mean values of each interval_length bin from data_start to data_end.
reference_forecasts.persistence.persistence_scalar_index(…) Calculate a persistence forecast using the mean value of the observation clear sky index or AC power index from data_start to data_end.
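The core of persistence_scalar can be sketched with pandas; interval labels, timezones, and forecast index construction in the real function are more involved:

```python
# Conceptual sketch: average the observation over a recent window and
# repeat that value over the forecast period.
import pandas as pd

obs = pd.Series(
    [8., 10., 12.],
    index=pd.date_range('2020-01-01 11:00', periods=3, freq='20min'))

def persistence_scalar(obs, forecast_index):
    # one scalar (the window mean) broadcast over the forecast index
    return pd.Series(obs.mean(), index=forecast_index)

fx_index = pd.date_range('2020-01-01 12:00', periods=4, freq='15min')
fx = persistence_scalar(obs, fx_index)
# every forecast value equals the window mean, 10.0
```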

Probabilistic persistence

reference_forecasts.persistence.persistence_probabilistic(…) Make a probabilistic persistence forecast using the observation from data_start to data_end.
reference_forecasts.persistence.persistence_probabilistic_timeofday(…) Make a probabilistic persistence forecast using the observation from data_start to data_end, matched by time of day.

Fetching external data

ARM

io.fetch.arm.format_date(date_object)
io.fetch.arm.request_arm_file_list(user_id, …) Make an http request to the ARM live API for filenames between start and end.
io.fetch.arm.list_arm_filenames(user_id, …) Get a list of filenames from ARM for the given datastream between start and end.
io.fetch.arm.request_arm_file(user_id, …) Get a file from ARM live in the form of a stream so that the python netCDF4 module can read it.
io.fetch.arm.retrieve_arm_dataset(user_id, …) Request a file from the ARM Live API and return a netCDF4 Dataset.
io.fetch.arm.extract_arm_variables(nc_file, …) Extracts variables and datetime index from an ARM netcdf.
io.fetch.arm.fetch_arm(user_id, api_key, …) Gets data from the ARM API and concatenates requested datastreams into a single pandas DataFrame.

NWP

io.fetch.nwp.get_with_retries(get_func, *args) Call get_func and retry if the request fails.
io.fetch.nwp.get_available_dirs(session, model) Get the available date/date+init_hr directories.
io.fetch.nwp.check_next_inittime(session, …) Check if data from the next model initialization time is available.
io.fetch.nwp.get_filename(basepath, …)
io.fetch.nwp.files_to_retrieve(session, …) Generator to return the parameters of the available files for download.
io.fetch.nwp.process_grib_to_netcdf(folder, …)
io.fetch.nwp.optimize_netcdf(nctmpfile, …) Compress the netCDF file and adjust the chunking for fast time-series access.
io.fetch.nwp.sleep_until_inittime(inittime, …)
io.fetch.nwp.startup_find_next_runtime(…) Find the next model run to get based on what is available on NOMADS and what .nc files are present locally.
io.fetch.nwp.next_run_time(inittime, …)
io.fetch.nwp.run(basepath, model_name, chunksize)
io.fetch.nwp.optimize_only(path_to_files, …)
io.fetch.nwp.check_wgrib2()

DOE RTC

io.fetch.rtc.request_doe_rtc_data(location, …) Makes a request to DOE RTC pv dashboard with the provided parameters.
io.fetch.rtc.fetch_doe_rtc(location, …) Requests and concatenates data from the DOE RTC pv dashboard API into a single dataframe.

NREL PVDAQ

io.fetch.pvdaq.get_pvdaq_metadata(system_id, …) Query PV system metadata from NREL’s PVDAQ data service.
io.fetch.pvdaq.get_pvdaq_data(system_id, year) Query PV system data from NREL’s PVDAQ data service.

EIA

io.fetch.eia.get_eia_data(series_id, …) Retrieve data from the EIA Open Data API.

BSRN

io.fetch.bsrn.parse_bsrn(fbuf) Parse a buffered BSRN station-to-archive file into a DataFrame.
io.fetch.bsrn.read_bsrn_from_nasa_larc(…) Read a range of BSRN monthly data from the NASA LARC.
io.fetch.bsrn.read_bsrn_month_from_nasa_larc(…) Read one month of BSRN data from the NASA LARC.

Reference observations

The following modules contain code for initializing the reference database, wrappers for fetching data, functions for processing (e.g. renaming and resampling) data, and wrapper functions for posting data. The pure fetch functions are found in pvlib.iotools and in solarforecastarbiter.io.fetch. See the source code for additional files with site and observation metadata.

io.reference_observations.common
io.reference_observations.crn
io.reference_observations.midc_config Contains a config dictionary mapping SFA variable names to MIDC fields.
io.reference_observations.midc
io.reference_observations.reference_data A CLI tool for creating reference sites and observations and updating them with data from their respective API.
io.reference_observations.rtc
io.reference_observations.solrad
io.reference_observations.srml
io.reference_observations.surfrad Functions for creating and updating NOAA SURFRAD related objects within the SolarForecastArbiter.
io.reference_observations.arm
io.reference_observations.pvdaq
io.reference_observations.eia
io.reference_observations.bsrn Initialize site obs/forecasts and fetch/update obs for BSRN sites.
io.reference_observations.pnnl Initialize site obs/forecasts and fetch/update obs for PNNL site.

Reference aggregates

The following modules contain code for initializing the reference aggregates using Reference Observations that have already been created. Examples include average GHI and DNI at SURFRAD sites, and the total PV power in the Portland, OR area of UO SRML sites.

io.reference_aggregates.generate_aggregate(…) Generate an aggregate object.
io.reference_aggregates.make_reference_aggregates(…) Create the reference aggregates in the API.

SFA API

To pass API calls through a proxy server, set either the HTTP_PROXY or HTTPS_PROXY environment variable. If necessary, set a SSL certificate using the REQUESTS_CA_BUNDLE environment variable.
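For example, in a POSIX shell the variables could be set as below; the proxy host and CA bundle path are placeholders to substitute with your own values:

```shell
# Route API calls through a proxy and trust a custom CA bundle
# (placeholder values).
export HTTPS_PROXY="https://proxy.example.com:8080"
export REQUESTS_CA_BUNDLE="/etc/ssl/certs/custom-ca.pem"
```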

Token

Get an API token.

io.api.request_cli_access_token(user, …) Request an API access token from Auth0.

API Session

Class for communicating with the Solar Forecast Arbiter API.

io.api.APISession(access_token[, …]) Subclass of requests.Session to handle requests to the SolarForecastArbiter API.
io.api.APISession.request(method, url, …) Modify the default Session.request to add in the default timeout and make requests relative to the base_url.
io.api.APISession.get_user_info() Get information about the current user from the API.

Sites

io.api.APISession.get_site(site_id) Retrieve site metadata for site_id from the API and process into the proper model.
io.api.APISession.list_sites() List all the sites available to a user.
io.api.APISession.list_sites_in_zone(zone) List all the sites available to a user in the given climate zone.
io.api.APISession.create_site(site) Create a new site in the API with the given Site model.

Observations

io.api.APISession.get_observation(observation_id) Get the metadata from the API for a given observation_id in an Observation object.
io.api.APISession.list_observations() List the observations a user has access to.
io.api.APISession.create_observation(observation) Create a new observation in the API with the given Observation model.
io.api.APISession.get_observation_values(…) Get observation values from start to end for observation_id from the API.
io.api.APISession.post_observation_values(…) Upload the given observation values to the appropriate observation_id of the API.
io.api.APISession.get_observation_time_range(…) Get the minimum and maximum timestamps for observation values.
io.api.APISession.get_observation_values_not_flagged(…) Get the dates where the observation series is NOT flagged with the given flag/bitmask.
io.api.APISession.get_observation_value_gaps(…) Get any gaps in observation data from start to end.

Forecasts

io.api.APISession.get_forecast(forecast_id) Get Forecast metadata from the API for the given forecast_id.
io.api.APISession.list_forecasts() List all Forecasts a user has access to.
io.api.APISession.create_forecast(forecast) Create a new forecast in the API with the given Forecast model.
io.api.APISession.get_forecast_values(…[, …]) Get forecast values from start to end for forecast_id.
io.api.APISession.post_forecast_values(…) Upload the given forecast values to the appropriate forecast_id of the API.
io.api.APISession.get_forecast_time_range(…) Get the minimum and maximum timestamps for forecast values.
io.api.APISession.get_forecast_value_gaps(…) Get any gaps in forecast data from start to end.

Probabilistic Forecasts

io.api.APISession.get_probabilistic_forecast(…) Get ProbabilisticForecast metadata from the API for the given forecast_id.
io.api.APISession.list_probabilistic_forecasts() List all ProbabilisticForecasts a user has access to.
io.api.APISession.create_probabilistic_forecast(…) Create a new forecast in the API with the given ProbabilisticForecast model.
io.api.APISession.get_probabilistic_forecast_values(…) Get values for each constant value forecast from start to end for forecast_id.
io.api.APISession.get_probabilistic_forecast_value_gaps(…) Get any gaps in forecast data from start to end.
io.api.APISession.get_probabilistic_forecast_constant_value(…) Get ProbabilisticForecastConstantValue metadata from the API for the given forecast_id.
io.api.APISession.get_probabilistic_forecast_constant_value_values(…) Get forecast values from start to end for forecast_id.
io.api.APISession.post_probabilistic_forecast_constant_value_values(…) Upload the given forecast values to the appropriate forecast_id of the API.
io.api.APISession.get_probabilistic_forecast_constant_value_time_range(…) Get the minimum and maximum timestamps for forecast values.
io.api.APISession.get_probabilistic_forecast_constant_value_value_gaps(…) Get any gaps in forecast data from start to end.

Aggregates

io.api.APISession.get_aggregate(aggregate_id) Get Aggregate metadata from the API for the given aggregate_id.
io.api.APISession.list_aggregates() List all Aggregates a user has access to.
io.api.APISession.create_aggregate(aggregate) Create a new aggregate in the API with the given Aggregate model.
io.api.APISession.get_aggregate_values(…) Get aggregate values from start to end for aggregate_id from the API.

Reports

io.api.APISession.process_report_dict(rep_dict) Load parameters from rep_dict into a Report object, getting forecasts and observations as necessary.
io.api.APISession.get_report(report_id) Get the metadata, and the raw report if it has been processed, from the API for the given report_id in a Report object.
io.api.APISession.list_reports() List the reports a user has access to.
io.api.APISession.create_report(report) Post the report request to the API.
io.api.APISession.post_raw_report_processed_data(…) Post the processed data that was used to make the report to the API.
io.api.APISession.get_raw_report_processed_data(…) Load the processed forecast/observation data into the datamodel.ProcessedForecastObservation objects of the raw_report.
io.api.APISession.post_raw_report(report_id, …) Update the report with the raw report and metrics.
io.api.APISession.update_report_status(…) Update the status of the report.

Climate Zones

io.api.APISession.list_sites_in_zone(zone) List all the sites available to a user in the given climate zone.
io.api.APISession.search_climatezones(…) Find all climate zones that the location is in.

Convenience method for unifying API for getting time series values for observations, forecasts, aggregates, and probabilistic forecasts:

io.api.APISession.get_values(obj, start, end) Get time series values from start to end for the object from the API.
io.api.APISession.chunk_value_requests(…) Breaks up a GET request for values into multiple requests limited by the request_limit argument.
io.api.APISession.get_value_gaps(obj, start, end) Get gaps in the time series values from start to end for the object from the API.

Utils

Utility functions for data IO.

io.utils.observation_df_to_json_payload(…) Extracts a variable from an observation DataFrame and formats it into a JSON payload for posting to the Solar Forecast Arbiter API.
io.utils.forecast_object_to_json(forecast_series) Converts a forecast Series to JSON to post to the SolarForecastArbiter API.
io.utils.json_payload_to_observation_df(…) Convert the JSON payload dict as returned by the SolarForecastArbiter API observations/values endpoint into a DataFrame.
io.utils.json_payload_to_forecast_series(…) Convert the JSON payload dict as returned by the SolarForecastArbiter API forecasts/values endpoint into a Series.
io.utils.adjust_start_end_for_interval_label(…) Adjusts the start and end times depending on the interval_label.
io.utils.adjust_timeseries_for_interval_label(…) Adjusts the index of the data depending on the interval_label, start, and end.
io.utils.ensure_timestamps(*time_args) Decorator that converts the specified time arguments of the wrapped function to pandas.Timestamp objects.
io.utils.serialize_timeseries(ser) Serialize a timeseries to JSON.
io.utils.deserialize_timeseries(data) Deserialize a timeseries from JSON.
io.utils.load_report_values(raw_report, values) Load the processed forecast/observation data into the datamodel.ProcessedForecastObservation objects of the raw_report.
io.utils.mock_raw_report_endpoints(base_url) Mock API report endpoints under base_url to enable testing of the report generation task run via the dashboard.
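The Series-to-JSON conversion can be sketched as below; the exact payload shape (field names, quality_flag handling) is an assumption here, so consult the io.utils functions for the real format:

```python
# Illustrative sketch: turn a pandas Series into a JSON value payload
# of the general shape the API's values endpoints use (assumed shape).
import json
import pandas as pd

ser = pd.Series(
    [1.5, 2.5],
    index=pd.date_range('2020-01-01 00:00Z', periods=2, freq='1h'))

payload = json.dumps({'values': [
    # float() avoids numpy scalars, which json.dumps cannot serialize
    {'timestamp': ts.isoformat(), 'value': float(val)}
    for ts, val in ser.items()]})
```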

Metrics

Entry points for calculating metrics for Forecast and Observation:

metrics.calculator.calculate_metrics(…) Loop through the forecast-observation pairs and calculate metrics.
metrics.calculator.calculate_deterministic_metrics(…) Calculate deterministic metrics for the processed data using the provided categories and metric types.
metrics.calculator.calculate_probabilistic_metrics(…) Calculate probabilistic metrics for the processed data using the provided categories and metric types.
metrics.calculator.calculate_event_metrics(…) Calculate event metrics for the processed data using the provided categories and metric types.
metrics.calculator.calculate_all_summary_statistics(…) Loop through the forecast-observation pairs and calculate summary statistics.
metrics.calculator.calculate_summary_statistics(…) Calculate summary statistics for the processed data using the provided categories and all metrics defined in summary.

Preprocessing

Functions for preparing the timeseries data before calculating metrics:

metrics.preprocessing.check_reference_forecast_consistency(…) Check that the reference forecast is consistent with the forecast.
metrics.preprocessing.apply_fill(fx_data, …) Apply fill procedure to the data from the start to end timestamps.
metrics.preprocessing.filter_resample(…) Filter and resample the observation to the forecast interval length.
metrics.preprocessing.align(fx_obs, fx_data, …) Align the observation data to the forecast data.
metrics.preprocessing.process_forecast_observations(…) Convert ForecastObservations into ProcessedForecastObservations applying any filters and resampling to align forecast and observation.
metrics.preprocessing.outage_periods(…) Converts report outage periods to forecast data periods to drop from analysis.
metrics.preprocessing.remove_outage_periods(…) Returns a copy of a dataframe with all values within an outage period dropped.
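The filter/resample/align idea can be sketched with pandas; the real functions also apply quality flag filters, honor interval labels, and record preprocessing results:

```python
# Conceptual sketch: resample the observation to the forecast interval
# length, then keep only the timestamps both series share.
import pandas as pd

obs = pd.Series(range(6),
                index=pd.date_range('2020-01-01', periods=6, freq='30min'),
                dtype=float)
fx = pd.Series([1., 2., 3.],
               index=pd.date_range('2020-01-01', periods=3, freq='1h'))

obs_resampled = obs.resample('1h').mean()      # 30 min -> 1 h means
common = obs_resampled.index.intersection(fx.index)
obs_aligned, fx_aligned = obs_resampled[common], fx[common]
```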

Deterministic

Functions to compute forecast deterministic performance metrics:

metrics.deterministic.mean_absolute(obs, fx) Mean absolute error (MAE).
metrics.deterministic.mean_bias(obs, fx[, …]) Mean bias error (MBE).
metrics.deterministic.root_mean_square(obs, fx) Root mean square error (RMSE).
metrics.deterministic.normalized_mean_absolute(…) Normalized mean absolute error (NMAE).
metrics.deterministic.normalized_mean_bias(…) Normalized mean bias error (NMBE).
metrics.deterministic.normalized_root_mean_square(…) Normalized root mean square error (NRMSE).
metrics.deterministic.centered_root_mean_square(obs, fx) Centered (unbiased) root mean square error (CRMSE).
metrics.deterministic.mean_absolute_percentage(obs, fx) Mean absolute percentage error (MAPE).
metrics.deterministic.forecast_skill(obs, …) Forecast skill (s).
metrics.deterministic.pearson_correlation_coeff(obs, fx) Pearson correlation coefficient (r).
metrics.deterministic.coeff_determination(obs, fx) Coefficient of determination (R^2).
metrics.deterministic.relative_euclidean_distance(obs, fx) Relative Euclidean distance (D).
metrics.deterministic.kolmogorov_smirnov_integral(obs, fx) Kolmogorov-Smirnov Test Integral (KSI).
metrics.deterministic.over(obs, fx) The OVER metric.
metrics.deterministic.combined_performance_index(obs, fx) Combined Performance Index (CPI) metric.
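Minimal numpy versions of the first few metrics above illustrate the definitions; the library's functions additionally accept pandas objects and, for some metrics, deadbands and normalization:

```python
# Sketches of MAE, MBE, and RMSE (errors defined as fx - obs).
import numpy as np

def mean_absolute(obs, fx):
    return np.mean(np.abs(fx - obs))

def mean_bias(obs, fx):
    return np.mean(fx - obs)

def root_mean_square(obs, fx):
    return np.sqrt(np.mean((fx - obs) ** 2))

obs = np.array([1., 2., 3.])
fx = np.array([2., 2., 2.])
# errors are [1, 0, -1]: MAE = 2/3, MBE = 0, RMSE = sqrt(2/3)
```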

Functions to compute costs:

metrics.deterministic.constant_cost(obs, fx, …) Compute cost using a constant cost value.
metrics.deterministic.time_of_day_cost(obs, …) Compute cost using a time-of-day varying cost value.
metrics.deterministic.datetime_cost(obs, fx, …) Compute cost using a date-time varying cost value.
metrics.deterministic.error_band_cost(obs, …) Compute cost according to various functions applied to specified error bands.
metrics.deterministic.cost(obs, fx, cost_params) Compute the cost for forecast errors according to cost_params.

Functions to compute errors and deadbands:

metrics.deterministic.deadband_mask(obs, fx, …) Calculate deadband mask.
metrics.deterministic.error(obs, fx) The difference fx - obs.
metrics.deterministic.error_deadband(obs, …) Error fx - obs, accounting for a deadband.

Probabilistic

Functions to compute forecast probabilistic performance metrics:

metrics.probabilistic.brier_score(obs, fx, …) Brier Score (BS).
metrics.probabilistic.brier_skill_score(obs, …) Brier Skill Score (BSS).
metrics.probabilistic.quantile_score(obs, …) Quantile Score (QS).
metrics.probabilistic.quantile_skill_score(…) Quantile Skill Score (QSS).
metrics.probabilistic.brier_decomposition(…) The 3-component decomposition of the Brier Score.
metrics.probabilistic.reliability(obs, fx, …) Reliability (REL) of the forecast.
metrics.probabilistic.resolution(obs, fx, …) Resolution (RES) of the forecast.
metrics.probabilistic.uncertainty(obs, fx, …) Uncertainty (UNC) of the forecast.
metrics.probabilistic.sharpness(fx_lower, …) Sharpness (SH).
metrics.probabilistic.continuous_ranked_probability_score(…) Continuous Ranked Probability Score (CRPS).
metrics.probabilistic.crps_skill_score(obs, …) CRPS skill score.
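For a binary event, the Brier score is the mean squared difference between the forecast probability and the 0/1 outcome; a conceptual sketch (the library function's exact signature and units differ):

```python
# Sketch of the Brier score for a binary event.
import numpy as np

def brier_score(obs, fx_prob):
    """obs is 0/1 event occurrence, fx_prob the forecast probability."""
    return np.mean((fx_prob - obs) ** 2)

bs = brier_score(np.array([1, 0, 1]), np.array([0.9, 0.1, 0.5]))
# squared errors: 0.01, 0.01, 0.25 -> BS = 0.09
```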

Event

Functions to compute deterministic event forecast performance metrics:

metrics.event.probability_of_detection(obs, fx) Probability of Detection (POD).
metrics.event.false_alarm_ratio(obs, fx) False Alarm Ratio (FAR).
metrics.event.probability_of_false_detection(obs, fx) Probability of False Detection (POFD).
metrics.event.critical_success_index(obs, fx) Critical Success Index (CSI).
metrics.event.event_bias(obs, fx) Event Bias (EBIAS).
metrics.event.event_accuracy(obs, fx) Event Accuracy (EA).
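These event metrics derive from the 2x2 contingency table of hits, false alarms, misses, and correct negatives; a minimal numpy sketch of the definitions:

```python
# Contingency-table sketch of POD, FAR, and CSI for 0/1 event series.
import numpy as np

def contingency(obs, fx):
    tp = np.sum((obs == 1) & (fx == 1))  # hits
    fp = np.sum((obs == 0) & (fx == 1))  # false alarms
    fn = np.sum((obs == 1) & (fx == 0))  # misses
    tn = np.sum((obs == 0) & (fx == 0))  # correct negatives
    return tp, fp, fn, tn

obs = np.array([1, 1, 0, 0])
fx = np.array([1, 0, 1, 0])
tp, fp, fn, tn = contingency(obs, fx)
pod = tp / (tp + fn)       # probability of detection
far = fp / (tp + fp)       # false alarm ratio
csi = tp / (tp + fp + fn)  # critical success index
```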

Reports

Main

Functions to compute the report.

reports.main.compute_report(access_token, …) Create a raw report using data from API.
reports.main.get_data_for_report(session, report) Get data for report.
reports.main.create_raw_report_from_data(…) Create a raw report using data and report metadata.

Figures

Functions for generating Plotly report metric figures.

reports.figures.plotly_figures.construct_metrics_dataframe(metrics) Construct a DataFrame of metric values from the report metrics (assumes metrics contain keys such as name and Total).
reports.figures.plotly_figures.construct_timeseries_dataframe(report) Construct two standardized Dataframes for the timeseries and scatter plot functions.
reports.figures.plotly_figures.bar(df, metric) Create a bar graph comparing a single metric across forecasts.
reports.figures.plotly_figures.bar_subdivisions(df, …) Create bar graphs comparing a single metric across subdivisions of time for multiple forecasts.
reports.figures.plotly_figures.output_svg(fig) Generates an SVG from the Plotly figure.
reports.figures.plotly_figures.raw_report_plots(…) Create a RawReportPlots object from the metrics of a report.
reports.figures.plotly_figures.timeseries_plots(report) Return the components for timeseries and scatter plots of the processed forecasts and observations.
reports.figures.plotly_figures.timeseries(…) Timeseries plot of one or more forecasts and observations.
reports.figures.plotly_figures.scatter(…) Adds Scatter plot traces of one or more forecasts and observations to the figure.

Functions for generating Bokeh plots.

reports.figures.bokeh_figures.construct_timeseries_cds(report) Construct two standardized Bokeh CDS for the timeseries and scatter plot functions.
reports.figures.bokeh_figures.construct_metrics_cds(metrics) Construct a Bokeh CDS of metric values from the report metrics (assumes metrics contain keys such as name and Total).
reports.figures.bokeh_figures.timeseries(…) Timeseries plot of one or more forecasts and observations.
reports.figures.bokeh_figures.scatter(…) Scatter plot of one or more forecasts and observations.
reports.figures.bokeh_figures.bar(cds, metric) Create a bar graph comparing a single metric across forecasts.
reports.figures.bokeh_figures.bar_subdivisions(…) Create bar graphs comparing a single metric across subdivisions of time for multiple forecasts.
reports.figures.bokeh_figures.output_svg(fig) Generates an SVG from the Bokeh figure.
reports.figures.bokeh_figures.raw_report_plots(…) Create a RawReportPlots object from the metrics of a report.
reports.figures.bokeh_figures.timeseries_plots(report) Return the bokeh components (script and div element) for timeseries and scatter plots of the processed forecasts and observations.

Template

Functions to generate output (HTML, PDF) for reports.

reports.template.render_html(report[, …]) Create full html file.
reports.template.get_template_and_kwargs(…) Returns the jinja2 Template object and a dict of template variables for the report.
reports.template.render_pdf(report, dash_url) Create a PDF report using LaTeX.

Validation

Validator

Functions to perform validation.

validation.validator.check_ghi_limits_QCRad(…) Tests for physical limits on GHI using the QCRad criteria.
validation.validator.check_dhi_limits_QCRad(…) Tests for physical limits on DHI using the QCRad criteria.
validation.validator.check_dni_limits_QCRad(…) Tests for physical limits on DNI using the QCRad criteria.
validation.validator.check_irradiance_limits_QCRad(…) Tests for physical limits on GHI, DHI or DNI using the QCRad criteria.
validation.validator.check_irradiance_consistency_QCRad(…) Checks consistency of GHI, DHI and DNI.
validation.validator.check_temperature_limits(…) Checks for extreme temperatures.
validation.validator.check_wind_limits(…) Checks for extreme wind speeds.
validation.validator.check_rh_limits(rh[, …]) Checks for extreme relative humidity.
validation.validator.check_ac_power_limits(…) Check for extreme AC power.
validation.validator.check_dc_power_limits(…) Check for extreme DC power.
validation.validator.check_ghi_clearsky(ghi, …) Flags GHI values greater than clearsky values.
validation.validator.check_poa_clearsky(…) Flags plane of array irradiance values greater than clearsky values.
validation.validator.check_day_night(…[, …]) Check for day/night periods based on solar zenith.
validation.validator.check_day_night_interval(…) Check for day/night periods based on solar zenith.
validation.validator.check_timestamp_spacing(…) Checks if spacing between times conforms to freq.
validation.validator.detect_stale_values(x) Detects stale data.
validation.validator.detect_interpolation(x) Detects sequences of data which appear linear.
validation.validator.detect_levels(x[, …]) Detects plateau levels in data.
validation.validator.detect_clipping(ac_power) Detects clipping in a series of AC power.
validation.validator.detect_clearsky_ghi(…) Identifies times when GHI is consistent with clear sky conditions.
validation.validator.stale_interpolated_window(…) Return the recommended window size for the stale and interpolated value detection functions.
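Each limit check above follows the same pattern: compare a measurement series against physically plausible bounds and return a boolean flag series. A minimal sketch of that pattern (an illustrative helper, not the package's implementation; the real QCRad checks derive their limits from solar position and extraterrestrial irradiance):

```python
import pandas as pd

def check_limits(values, lower, upper):
    """Return a boolean Series: True where values fall within [lower, upper].

    Illustrative only; QCRad-style checks compute the bounds per
    timestamp rather than using fixed constants.
    """
    return (values >= lower) & (values <= upper)

# Flag GHI observations outside a plausible physical range.
ghi = pd.Series([-5.0, 0.0, 450.0, 1600.0])
flags = check_limits(ghi, lower=-4.0, upper=1500.0)
print(flags.tolist())  # [False, True, True, False]
```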

Tasks

Perform a sequence of validation steps. Used by the API to initiate validation.

validation.tasks.validate_ghi(observation, …) Run validation checks on a GHI observation.
validation.tasks.validate_dni(observation, …) Run validation checks on a DNI observation.
validation.tasks.validate_dhi(observation, …) Run validation checks on a DHI observation.
validation.tasks.validate_poa_global(…) Run validation checks on a POA observation.
validation.tasks.validate_dc_power(…) Run a number of validation checks on a daily timeseries of DC power.
validation.tasks.validate_ac_power(…) Run a number of validation checks on a daily timeseries of AC power.
validation.tasks.validate_defaults(…) Run default validation checks on an observation.
validation.tasks.validate_air_temperature(…) Run validation checks on an air temperature observation.
validation.tasks.validate_wind_speed(…) Run validation checks on a wind speed observation.
validation.tasks.validate_relative_humidity(…) Run validation checks on a relative humidity observation.
validation.tasks.validate_daily_ghi(…) Run validation on a daily timeseries of GHI.
validation.tasks.validate_daily_dc_power(…) Run validation on a daily timeseries of DC power.
validation.tasks.validate_daily_ac_power(…) Run a number of validation checks on a daily timeseries of AC power.
validation.tasks.validate_daily_defaults(…) Run default daily validation checks on an observation.
validation.tasks.apply_immediate_validation(…) Apply the appropriate validation functions to the observation_values.
validation.tasks.apply_daily_validation(…) Apply the appropriate daily validation functions to the observation_values.
validation.tasks.apply_validation(…) Applies the appropriate daily or immediate validation functions to the observation_values depending on the length of the data.
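apply_validation chooses between the immediate and daily code paths based on how much data is present. A sketch of that dispatch logic (the one-day threshold and the callable parameters are assumptions for illustration, not the package's signature):

```python
import pandas as pd

def apply_validation(observation_values, immediate, daily):
    """Run the daily checks when the series spans at least one full day,
    otherwise run the immediate checks. Illustrative dispatch only."""
    index = observation_values.index
    if index[-1] - index[0] >= pd.Timedelta("1D"):
        return daily(observation_values)
    return immediate(observation_values)

# 25 hourly points span exactly one day, so the daily path is taken.
idx = pd.date_range("2023-06-01 00:00", periods=25, freq="h")
values = pd.Series(range(25), index=idx)
which = apply_validation(values,
                         immediate=lambda v: "immediate",
                         daily=lambda v: "daily")
print(which)  # daily
```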

Quality flag mapping

Functions to handle the translation of validation results and database storage.

validation.quality_mapping.convert_bool_flags_to_flag_mask(…) Convert boolean validation results into an integer quality flag mask.
validation.quality_mapping.mask_flags(…[, …]) Decorator that will convert a boolean pandas object into an integer, bitmasked object when _return_mask=True.
validation.quality_mapping.has_data_been_validated(flags) Return True (or a boolean series) if flags has been validated.
validation.quality_mapping.get_version(flag) Extract the version from flag.
validation.quality_mapping.check_if_single_value_flagged(…) Check if the single integer flag has been flagged for flag_description.
validation.quality_mapping.which_data_is_ok(flags) Return True for flags that have been validated and are OK.
validation.quality_mapping.check_for_all_descriptions(flag) Return a boolean Series indicating the checks a flag represents.
validation.quality_mapping.convert_mask_into_dataframe(…) Convert flag_series into a boolean DataFrame indicating which checks the flags represent.
validation.quality_mapping.convert_flag_frame_to_strings(…) Convert the flag_frame output of convert_mask_into_dataframe() into a pandas.Series of strings which are the active flag names separated by sep.
validation.quality_mapping.check_if_series_flagged(…) Check if flag_series has been flagged for the checks given by flag_description.
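Under the hood, validation results are stored as integer bitmasks in which each bit corresponds to one check. A simplified sketch of the encode/decode round trip (the flag names and bit positions here are made up for illustration; the real mapping also reserves bits for a version number and a "has been validated" marker):

```python
# Hypothetical bit positions for three checks.
FLAG_BITS = {"USER FLAGGED": 0, "NIGHTTIME": 1, "LIMITS EXCEEDED": 2}

def encode(active_flags):
    """Pack a set of flag names into one integer bitmask."""
    mask = 0
    for name in active_flags:
        mask |= 1 << FLAG_BITS[name]
    return mask

def decode(mask):
    """Unpack an integer bitmask back into the set of active flag names."""
    return {name for name, bit in FLAG_BITS.items() if mask & (1 << bit)}

mask = encode({"NIGHTTIME", "LIMITS EXCEEDED"})
print(mask)                   # 6 (bits 1 and 2 set)
print(sorted(decode(mask)))   # ['LIMITS EXCEEDED', 'NIGHTTIME']
```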

Plotting

Timeseries

Time series plotting.

plotting.timeseries.build_figure_title(…) Builds a title for the plot.
plotting.timeseries.make_quality_bars(…) Make figures to display whether a time is flagged for any of the columns in source.
plotting.timeseries.add_hover_tool(fig, …) Add a hover tool to fig.
plotting.timeseries.make_basic_timeseries(…) Make a basic timeseries plot (with either a step or line) and add a hover tool.
plotting.timeseries.generate_forecast_figure(…) Creates a Bokeh timeseries figure for forecast data.
plotting.timeseries.generate_observation_figure(…) Creates a Bokeh figure from API responses for an observation.
plotting.timeseries.generate_probabilistic_forecast_figure(…) Creates a Plotly figure spec from the API response for a probabilistic forecast group.

Utils

Utility functions for plotting.

plotting.utils.format_variable_name(variable) Make a human-readable name, with units, for the variable.
plotting.utils.align_index(df, interval_length) Align the index to the specified interval_length inserting NaNs as appropriate.
plotting.utils.line_or_step(interval_label) For a given interval_label, determine the plot_method of the data, any kwargs for that plot method, and kwargs for adding a hovertool for the data.
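The reindexing step behind align_index can be sketched with plain pandas (a simplified version; the package's parameter handling and edge cases differ):

```python
import pandas as pd

def align_index(df, interval_length):
    """Reindex df onto a regular grid of interval_length minutes,
    inserting NaN rows where observations are missing. Simplified sketch."""
    freq = pd.Timedelta(minutes=interval_length)
    full_index = pd.date_range(df.index[0], df.index[-1], freq=freq)
    return df.reindex(full_index)

# Two observations 10 minutes apart, aligned to a 5-minute grid:
idx = pd.to_datetime(["2023-06-01 00:00", "2023-06-01 00:10"])
df = pd.DataFrame({"ghi": [100.0, 120.0]}, index=idx)
aligned = align_index(df, interval_length=5)
print(len(aligned))  # 3 rows; the 00:05 row is NaN
```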

Generic Utilities

Generic utility functions.

utils.compute_aggregate(data, …[, new_index]) Computes an aggregate quantity according to agg_func of the data.
utils.sha256_pandas_object_hash(obj) Compute a hash for a pandas object.
utils.generate_continuous_chunks(data, freq) Generator to split data into continuous chunks with spacing of freq.
utils.merge_ranges(ranges) Generator to merge the ranges like (min_val, max_val) removing any overlap.
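merge_ranges collapses overlapping (min_val, max_val) pairs. A minimal generator in that spirit (assuming inclusive bounds, so touching ranges are combined; not the package's exact implementation):

```python
def merge_ranges(ranges):
    """Yield (min_val, max_val) tuples with any overlap merged.

    Sorts the input, then extends the current range whenever the next
    one starts inside it; otherwise emits the current range and moves on.
    """
    current = None
    for start, end in sorted(ranges):
        if current is None:
            current = [start, end]
        elif start <= current[1]:
            current[1] = max(current[1], end)  # overlap: extend the range
        else:
            yield tuple(current)
            current = [start, end]
    if current is not None:
        yield tuple(current)

merged = list(merge_ranges([(5, 10), (1, 3), (2, 4), (9, 12)]))
print(merged)  # [(1, 4), (5, 12)]
```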