evaluator

This module is used to create and run Evaluators for different models.

Evaluators allow for rapid model simulation or solving, with manipulation of input variables and filtering of output variables.

Currently there are four specific evaluators: EvaluatorEP (for EnergyPlus), EvaluatorEH (for PyEHub), EvaluatorGeneric (for custom functions), and AdaptiveSR (for adaptive sampling). The Evaluators wrap their respective modeling tools with the evaluator interface.
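All evaluators share the same call interface: calling an evaluator with a list of input values returns a tuple of objective (and constraint) values, and df_apply (documented below) evaluates a whole DataFrame of inputs. A minimal sketch, assuming evaluator is any already-constructed evaluator for a problem with two inputs and one objective:

    # Hypothetical usage; `evaluator` is assumed to already exist.
    result = evaluator([0.25, 0.75])   # -> a tuple of objective values, e.g. (1.0,)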

class evaluator.AbstractEvaluator(problem: Problem, error_mode: str = 'Failfast', error_value: Optional[tuple] = None, progress_bar: bool = True)[source]

Base class for Evaluators. This template requires that Evaluators are callable. It also gives them the df_apply method and result caching.

Takes a list of values parameterising a model and returns objective/constraint results.

Evaluates each row in a DataFrame and returns the output in another DataFrame (optionally including the input), caching the results.

Parameters:
  • problem – description of the inputs and outputs the evaluator will use

  • error_mode – One of {‘Failfast’, ‘Silent’, ‘Print’}. Failfast: Stop evaluating as soon as an error is encountered. Silent: Evaluation will return the error_value for any input values that raise an error. Print: Same as silent, but warnings are printed to stderr for any errors.

  • error_value – The value of the evaluation if an error occurs. Incompatible with error_mode=’Failfast’. Must be a tuple consisting of values for the objectives followed by values for the constraints.

  • progress_bar – whether or not to display a progress bar
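For example, a hedged sketch of silencing failed runs with a concrete evaluator such as EvaluatorEP (documented below); the problem and building variables are assumed to exist, and the problem is assumed to have two objectives and no constraints:

    # Failed evaluations return (inf, inf) instead of aborting the whole run.
    evaluator = EvaluatorEP(
        problem,
        building,
        error_mode="Silent",
        error_value=(float("inf"), float("inf")),
    )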

cache_clear() None[source]

Clears any cached values of calls to this evaluator. This should be called whenever the evaluator’s outputs could have changed.

df_apply(df: DataFrame, keep_input=False, processes: int = 1, **kwargs) DataFrame[source]

Applies this evaluator to an entire DataFrame, row by row.

Parameters:
  • df – a DataFrame where each row represents valid input values for this Evaluator.

  • keep_input – whether to include the input data in the returned DataFrame

  • processes – number of processes (cores) to use

Returns:

Returns a DataFrame with one column per objective containing the results.
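A hedged sketch of df_apply usage (assuming an already-constructed evaluator whose problem has two inputs; each row supplies one value per input, in the evaluator's input order):

    import pandas as pd

    samples = pd.DataFrame([[0.1, 0.3], [0.2, 0.4]], columns=["x", "y"])

    # keep_input=True echoes the input columns alongside the objective columns;
    # processes > 1 evaluates rows in parallel.
    results = evaluator.df_apply(samples, keep_input=True, processes=2)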

estimate_time(df: DataFrame, processes: int = 1) None[source]

Prints a rough estimate of how long a job will take to complete. The estimate tends to be low for small sample sets but becomes more accurate as the sample size grows.

Parameters:
  • df – a DataFrame where each row represents valid input values for this Evaluator.

  • processes – number of processes (cores) to use
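For example (assuming samples is a DataFrame of valid inputs for this evaluator):

    evaluator.estimate_time(samples, processes=4)   # prints a rough runtime estimate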

abstract eval_single(values: Sequence, **kwargs) tuple[source]

Returns the objective results for a single list of parameter values.

Parameters:
  • values – A list of values to set each parameter to, in the same order as this evaluator’s inputs

  • kwargs – Any keyword arguments

Returns:

a tuple of the objectives and constraints

to_platypus() Problem[source]

Converts this evaluator (and the underlying problem) to a platypus-compatible format

Returns:

A platypus Problem that can optimise over this evaluator
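A hedged sketch of optimising over an evaluator with the platypus library (NSGAII, run and result are standard platypus usage; the evaluator variable is assumed to exist):

    from platypus import NSGAII

    platypus_problem = evaluator.to_platypus()
    algorithm = NSGAII(platypus_problem)
    algorithm.run(1000)              # number of function evaluations
    solutions = algorithm.result     # list of platypus Solution objects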

update_pbar()[source]

Updates the progress bar, marking one more row as completed.

validate(values: Sequence) None[source]

Takes a list of values and checks that they are a valid input for this evaluator.

class evaluator.AdaptiveSR(reference: Optional[AbstractEvaluator] = None, error_mode: str = 'Failfast', error_value: Optional[Sequence] = None)[source]

A template for making adaptive-sampling-based models compatible with the evaluator interface.

This template:

  • Wraps a user-specified model training process

  • Evaluates the current model

  • Retrains the model on new data

  • Records training data

  • Clears the cache when retrained

  • Has a reference evaluator used as ground truth

Optional:

  • Finds the best points to add to the model

  • Updates the model without fully retraining

TODO: make a version that can wrap a scikit-learn pipeline to reduce boilerplate

TODO: make a version that can bundle multiple single-objective models together

Parameters:
  • reference – an evaluator used as the ground truth when gathering new training data

  • error_mode – One of {‘Failfast’, ‘Silent’, ‘Print’}. Failfast: Stop evaluating as soon as an error is encountered. Silent: Evaluation will return the error_value for any input values that raise an error. Print: Same as silent, but warnings are printed to stderr for any errors.

  • error_value – The value of the evaluation if an error occurs. Incompatible with error_mode=’Failfast’. Must be a tuple consisting of values for the objectives followed by values for the constraints.

  • progress_bar – whether or not to display a progress bar
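A minimal sketch of a concrete subclass, assuming the module is importable as besos.evaluator, scikit-learn is available, and the stored training data is accessible through the input_data and output_data attributes mentioned under append_data below (attribute names and constructor wiring may differ between versions):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    from besos.evaluator import AdaptiveSR


    class GPEvaluator(AdaptiveSR):
        """Hypothetical adaptive surrogate backed by a Gaussian process."""

        def train(self) -> None:
            # Fit a fresh model on all stored training data and keep it as self.model.
            self.model = GaussianProcessRegressor().fit(self.input_data, self.output_data)

        def eval_single(self, values, **kwargs) -> tuple:
            # Predict the outputs for one datapoint and return them as a tuple.
            prediction = self.model.predict(np.atleast_2d(list(values)))[0]
            return tuple(np.atleast_1d(prediction))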

append_data(data: Union[DataFrame, array], deduplicate: bool = True) None[source]

Appends the input (X) and output (y) columns of data to input_data and output_data respectively

Parameters:
  • data – a table of training data to store

  • deduplicate – whether to remove duplicates from the combined DataFrame

Returns:

None

do_infill(data: DataFrame) None[source]

Updates the model using the inputs and outputs in data, and stores the added data

Parameters:

data – a table of training data

Returns:

None

abstract eval_single(values: Sequence, **kwargs) Tuple[source]

Evaluates a single input point

Parameters:
  • values – The datapoint to evaluate

  • kwargs – Arbitrary keyword arguments.

Returns:

A tuple of the predicted outputs for this datapoint

get_from_reference(X: Union[DataFrame, array]) DataFrame[source]

Uses the reference evaluator to get the true values for a DataFrame of datapoints

Parameters:

X – a table containing the datapoints to evaluate

Returns:

a DataFrame containing the results of the datapoints

get_infill(num_datapoints: int) Union[DataFrame, array][source]

Generates data that is most likely to improve the model, and can be used for retraining.

Parameters:

num_datapoints – the number of datapoints to generate

Returns:

the generated datapoints, in a tabular data structure

infill(num_datapoints: int) None[source]

Adds num_datapoints samples to the model and updates it.

Parameters:

num_datapoints – number of datapoints to add to the model’s training set

Returns:

None
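For reference, infill is roughly equivalent to the following manual sequence built from the methods documented here (a hedged sketch; whether the intermediate tables carry the input columns alongside the outputs may vary):

    new_points = surrogate.get_infill(10)                # points expected to improve the model
    new_data = surrogate.get_from_reference(new_points)  # ground-truth results from the reference evaluator
    surrogate.do_infill(new_data)                        # store the data and update/retrain the model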

abstract train() None[source]

Generates a new model using the stored data, and stores it as self.model

update_model(new_data: Union[DataFrame, array], old_data: Optional[DataFrame] = None) None[source]

Modifies self.model to incorporate the new data.

This function should not edit the existing data

Parameters:
  • new_data – a table of inputs and outputs

  • old_data – the table of inputs and outputs without the new data

Returns:

None

class evaluator.EvaluatorEH(problem: Problem, hub, out_dir: Optional[Union[PathLike, str]] = None, err_dir: Union[PathLike, str] = PosixPath('/home/docs/checkouts/readthedocs.org/user_builds/besos/checkouts/latest/docs/BESOS_Errors'), error_mode: str = 'Failfast', error_value: Optional[Sequence] = None, progress_bar: bool = True)[source]

This evaluator uses a Problem to modify an energy hub, and then solve it.

Parameters:
  • problem – a parametrization of the hub and the desired outputs

  • hub – the energy hub that is being simulated.

  • out_dir – the directory used for files created by the PyEHub simulation.

  • err_dir – the directory where files from a failed run are stored.

  • error_mode – One of {‘Failfast’, ‘Silent’, ‘Print’}. Failfast: Any error aborts the evaluation. Silent: Evaluation will return the error_value for any lists of values that raise an error. Print: Same as silent, but warnings are printed to stderr for any errors.

  • error_value – The value of the evaluation if an error occurs. Incompatible with error_mode=’Failfast’.

  • progress_bar – whether or not to display a progress bar
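A hedged sketch of constructing an EvaluatorEH, assuming a PyEHub model hub and a matching problem with one parameter have already been built (how the hub and problem are constructed is outside this module):

    from besos.evaluator import EvaluatorEH

    # `problem` and `hub` are assumed to exist; see the problem module and the PyEHub docs.
    evaluator = EvaluatorEH(problem, hub, error_mode="Print")
    result = evaluator([500])   # one value per parameter, in the problem's input order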

eval_single(values: Sequence) tuple[source]

Returns the objective results for a single list of parameter values.

Parameters:
  • values – A list of values to set each parameter to, in the same order as this evaluator’s inputs

  • kwargs – Any keyword arguments

Returns:

a tuple of the objectives and constraints

validate(values: Sequence) None[source]

Takes a list of values and checks that they are a valid input for this evaluator.

class evaluator.EvaluatorEP(problem: Problem, building, epw: Union[PathLike, str] = '/home/docs/checkouts/readthedocs.org/user_builds/besos/envs/latest/lib/python3.7/site-packages/besos/data/example_epw.epw', out_dir: Optional[Union[PathLike, str]] = None, err_dir: Union[PathLike, str] = PosixPath('/home/docs/checkouts/readthedocs.org/user_builds/besos/checkouts/latest/docs/BESOS_Errors'), error_mode: str = 'Failfast', error_value: Optional[Sequence] = None, version=None, progress_bar: bool = True, ep_path: Optional[Union[PathLike, str]] = None, *, epw_file: Optional[Union[PathLike, str]] = None)[source]

This evaluator uses a Problem to modify a building, and then simulate it. It keeps track of the building and the weather file.

Parameters:
  • problem – a parametrization of the building and the desired outputs

  • building – the building that is being simulated.

  • epw – path to the epw file representing the weather

  • out_dir – the directory used for files created by the EnergyPlus simulation.

  • err_dir – the directory where files from a failed run are stored.

  • error_mode – One of {‘Failfast’, ‘Silent’, ‘Print’}. Failfast: Any error aborts the evaluation. Silent: Evaluation will return the error_value for any lists of values that raise an error. Print: Same as silent, but warnings are printed to stderr for any errors.

  • error_value – The value of the evaluation if an error occurs. Incompatible with error_mode=’Failfast’.

  • version – Deprecated

  • progress_bar – whether or not to display a progress bar

  • epw_file – Deprecated. Use epw instead. Path to the epw file representing the weather.
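A hedged sketch of a typical setup (the helper function, parameter selector, and meter name below follow the common BESOS examples but are assumptions here; exact constructor argument names may differ between versions):

    from besos import eppy_funcs as ef
    from besos.evaluator import EvaluatorEP
    from besos.parameters import FieldSelector, Parameter, RangeParameter
    from besos.problem import EPProblem

    building = ef.get_building()    # example building shipped with BESOS
    insulation = Parameter(
        FieldSelector(class_name="Material",
                      object_name="Mass NonRes Wall Insulation",
                      field_name="Thickness"),
        RangeParameter(0.01, 0.99),
    )
    problem = EPProblem([insulation], ["Electricity:Facility"])

    evaluator = EvaluatorEP(problem, building)   # the bundled example weather file is used if epw is omitted
    result = evaluator([0.5])                    # -> (annual facility electricity use,)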

df_apply(df: DataFrame, keep_input=False, processes: int = 1, keep_dirs: bool = False, *, out_dir=None, stdout_mode='Silent', **kwargs) DataFrame[source]

Applies this evaluator to an entire dataFrame, row by row.

Parameters:
  • df – a DataFrame where each row represents valid input values for this Evaluator.

  • keep_input – whether to include the input data in the returned DataFrame

  • processes – amount of cores to use

  • keep_dirs – whether or not to keep the output directories

  • stdout_mode – One of {“Silent”, “Verbose”}; a warning is raised for any other value. Silent: EnergyPlus stdout output is suppressed. This is the default. Verbose: EnergyPlus output is printed to stdout.

Returns:

Returns a DataFrame with one column containing the results for each objective.

eval_single(values: Sequence, out_dir=None, keep_dirs=False, **kwargs)[source]

Returns the objective results for a single list of parameter values.

Parameters:
  • values – A list of values to set each parameter to, in the same order as this evaluator’s inputs

  • kwargs – Any keyword arguments

Returns:

a tuple of the objectives and constraints

generate_building(df: DataFrame, index: int, file_name: str) None[source]

Generates an idf file for the selected row.

Parameters:
  • df – DataFrame containing the selected row.

  • index – the starting row index.

  • file_name – the file name to save the generated idf as.

Returns:

None

class evaluator.EvaluatorGeneric(evaluation_func: Callable[[Sequence], Tuple[float, ...]], problem: Problem, error_mode: str = 'Failfast', error_value: Optional[Sequence] = None, progress_bar: bool = True)[source]

Generic Evaluator

This evaluator is a wrapper around an evaluation function. It can be useful for quick debugging.

Parameters:
  • evaluation_func – a function that takes a list of values as input and returns a tuple of the objective values for that point in the solution space

  • problem – description of the inputs and outputs the evaluator will use

  • progress_bar – whether or not to display a progress bar
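A minimal sketch of wrapping a plain Python function (the Parameter and RangeParameter construction follows the common BESOS examples; exact argument names may vary between versions):

    from besos.evaluator import EvaluatorGeneric
    from besos.parameters import Parameter, RangeParameter
    from besos.problem import Problem

    # Two continuous inputs in [0, 1] and one objective named "sum".
    problem = Problem(
        [Parameter(value_descriptors=RangeParameter(0, 1), name="x"),
         Parameter(value_descriptors=RangeParameter(0, 1), name="y")],
        outputs=["sum"],
    )

    def evaluation_func(values):
        x, y = values
        return (x + y,)   # tuple of objective values

    evaluator = EvaluatorGeneric(evaluation_func, problem)
    print(evaluator([0.2, 0.3]))   # -> (0.5,)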

eval_single(values: Sequence) Sequence[source]

Returns the objective results for a single list of parameter values.

Parameters:
  • values – A list of values to set each parameter to, in the same order as this evaluator’s inputs

  • kwargs – Any keyword arguments

Returns:

a tuple of the objectives and constraints

class evaluator.EvaluatorSR(*args, **kwargs)[source]

Surrogate Model Evaluator

This evaluator has been replaced by EvaluatorGeneric and will be removed in a future release.

Deprecated since version 1.6.0: EvaluatorSR has been renamed to EvaluatorGeneric with the same functionality.

Parameters:
  • evaluation_func – a function that takes a list of values as input and returns a tuple of the objective values for that point in the solution space

  • problem – description of the inputs and outputs the evaluator will use

  • progress_bar – whether or not to display a progress bar