Scaleout Edge Clients
This section documents the client implementations available for Scaleout Edge. Clients run on edge nodes and communicate with the Scaleout Edge network to perform local training, evaluation, and model exchange. Multiple client implementations exist to support different environments and programming languages.
Python Client
The Python client provides a high-level API for integrating local training code with the Scaleout Edge network. It is suitable for servers, development machines, notebooks, and lightweight edge nodes.
- class scaleout.EdgeClient(train_callback: Callable[[ScaleoutModel, Dict], Tuple[ScaleoutModel | None, Dict]] | None = None, validate_callback: Callable[[ScaleoutModel], Dict] | None = None, predict_callback: Callable[[ScaleoutModel], Dict] | None = None)[source]
Bases: object
Client for interacting with the Scaleout network.
- __init__(train_callback: Callable[[ScaleoutModel, Dict], Tuple[ScaleoutModel | None, Dict]] | None = None, validate_callback: Callable[[ScaleoutModel], Dict] | None = None, predict_callback: Callable[[ScaleoutModel], Dict] | None = None) None[source]
Initialize the EdgeClient.
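As a hedged sketch of how a train callback matching the documented signature might be written (the callback body and the metadata keys are illustrative assumptions, not part of the documented API):

```python
from typing import Dict, Tuple

# Hypothetical train callback matching the documented signature
# (ScaleoutModel, Dict) -> Tuple[ScaleoutModel | None, Dict].
# The body is illustrative; real local training goes where noted.
def train(model_in, settings: Dict) -> Tuple[object, Dict]:
    # ... run local training on model_in here ...
    metadata = {"num_examples": 128, "epochs": settings.get("epochs", 1)}
    return model_in, metadata

# Registering the callback (commented out because it requires the
# scaleout package to be installed):
# client = scaleout.EdgeClient(train_callback=train)
```

The callback returns the (possibly updated) model together with a metadata dict describing the training round.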
- check_task_abort() None[source]
Check if the ongoing task has been aborted.
This function should be called periodically from the task callback so that the task can be interrupted if needed. If called from a thread that is not running the task, this function is a no-op.
- Raises:
StoppedException – If the task was aborted.
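A minimal sketch of the intended usage, with check_task_abort() called between epochs of a long-running train callback (the callback shape is an assumption; StoppedException propagates from the client on abort):

```python
# Sketch: a train callback that calls check_task_abort() between epochs
# so the server can interrupt the task. `client` is an EdgeClient
# instance; it is optional here only so the sketch runs standalone.
def train(model_in, settings, client=None):
    epochs_run = 0
    for _ in range(settings.get("epochs", 3)):
        if client is not None:
            client.check_task_abort()  # raises StoppedException if aborted
        # ... one epoch of local training ...
        epochs_run += 1
    return model_in, {"epochs_run": epochs_run}
```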
- connect_to_api(url: str, json: dict = None, token: str | None = None, token_refresh_callback: Callable[[str, str, datetime], None] | None = None) Tuple[ConnectToApiResult, Any][source]
Connect to the Scaleout API. Accepts a refresh token, instantiates a TokenManager, and uses the access token.
- property current_logging_context: LoggingContext | None
Get the current logging context for the running thread.
- get_model_from_combiner(model_id: str) ScaleoutModel[source]
Get the model from the combiner.
- init_grpchandler(config: GrpcConnectionOptions, token: str | None = None, url: str | None = None, token_refresh_callback: Callable[[str, str, datetime], None] | None = None) bool[source]
Initialize the gRPC handler. Accepts a refresh token, instantiates a TokenManager, and uses the access token.
- log_attributes(attributes: dict, check_task_abort: bool = True) bool[source]
Log the attributes to the server.
- log_metric(metrics: dict, step: int = None, commit: bool = True, check_task_abort=True, context: LoggingContext = None) bool[source]
Log the metrics to the server.
- Parameters:
metrics (dict) – The metrics to log.
step (int, optional) – The step number. If provided, the context step will be set to this value; if not provided, the step from the context will be used.
commit (bool, optional) – Whether or not to increment the step. Defaults to True.
check_task_abort (bool, optional) – Whether or not to check for task abort. Defaults to True.
context (LoggingContext, optional) – The logging context to use. Defaults to None, which uses the current context.
- Returns:
True if the metrics were logged successfully, False otherwise.
- Return type:
bool
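A hedged usage sketch for log_metric, logging a loss curve with an explicit step and commit=True (the metric name "train_loss" and the helper function are illustrative assumptions):

```python
# Sketch: logging a per-epoch loss curve via log_metric. The client
# is passed in so the helper can be exercised with any EdgeClient-like
# object exposing log_metric(metrics, step=..., commit=...).
def log_losses(client, losses):
    ok = True
    for step, loss in enumerate(losses):
        # commit=True increments the step after each logged value
        ok = ok and client.log_metric({"train_loss": loss}, step=step, commit=True)
    return ok
```

The helper returns False as soon as any individual log_metric call reports failure.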
- log_telemetry(telemetry: dict, check_task_abort: bool = True) bool[source]
Log the telemetry data to the server.
- predict_global_model(request: scaleoututil.grpc.scaleout_pb2.TaskRequest) None[source]
Predict using the global model.
- send_model_to_combiner(model: ScaleoutModel) scaleoututil.grpc.scaleout_pb2.ModelResponse[source]
Send the model to the combiner.
- send_status(msg: str, log_level: scaleoututil.grpc.scaleout_pb2.LogLevel = scaleoututil.grpc.scaleout_pb2.LogLevel.INFO, type: str | None = None) None[source]
Send a status message with the given log level.
- set_custom_callback(callback_name: str, callback: Callable[[scaleoututil.grpc.scaleout_pb2.TaskRequest], Dict]) None[source]
Set a custom task callback.
- class scaleout.ScaleoutModel[source]
Bases: object
The ScaleoutModel class is the primary model representation in the Scaleout framework. A ScaleoutModel object contains a data object (tempfile.SpooledTemporaryFile) that holds the model parameters. The model parameters dict can be extracted from the data object or used to create a model object. Unpacking of the model parameters is done by the helper, which must be provided either to the class or to the method.
- static from_file(file_path: str) ScaleoutModel[source]
Creates a ScaleoutModel from a file.
- static from_filechunk_stream(filechunk_stream: Iterable[scaleoututil.grpc.scaleout_pb2.FileChunk]) ScaleoutModel[source]
Creates a ScaleoutModel from a filechunk stream.
- static from_model_params(model_params: dict, helper=None) ScaleoutModel[source]
Creates a ScaleoutModel from model parameters.
- static from_stream(stream: BinaryIO) ScaleoutModel[source]
Creates a ScaleoutModel from a stream.
- get_filechunk_stream(chunk_size=1048576)[source]
Returns a generator that yields chunks of the model data.
- get_stream()[source]
Returns a stream of the model data.
To avoid concurrency issues, a new stream is created each time this method is called.
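The spooled-file data pattern described for ScaleoutModel can be illustrated with a standalone sketch (this uses tempfile directly and is not the ScaleoutModel implementation; the function names are illustrative):

```python
import tempfile

# Write serialized parameters into a SpooledTemporaryFile, the same
# data-object type the ScaleoutModel description mentions.
def to_spooled_file(payload: bytes) -> tempfile.SpooledTemporaryFile:
    data = tempfile.SpooledTemporaryFile()
    data.write(payload)
    data.seek(0)
    return data

# Yield the payload in fixed-size chunks, analogous to
# get_filechunk_stream(chunk_size=1048576).
def chunk_stream(data, chunk_size=1048576):
    data.seek(0)
    while True:
        chunk = data.read(chunk_size)
        if not chunk:
            break
        yield chunk
```

Reassembling the chunks yields the original payload, which mirrors how the model data flows through the filechunk stream APIs.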
Additional Clients
Support for additional client implementations (e.g., C++ and Kotlin) will be included here in future versions of the documentation.