PropertyEstimatorServer

class propertyestimator.server.PropertyEstimatorServer(calculation_backend, storage_backend, port=8000, working_directory='working-data')[source]

The object responsible for coordinating all property estimations to be run using the property estimator, in addition to deciding at which fidelity a property will be calculated.

It acts as a server which receives jobs submitted by clients launched via the property estimator.

Warning

This class is still heavily under development and is subject to rapid changes.

Notes

Methods to handle the TCP messages are based on the StackOverflow response from A. Jesse Jiryu Davis: https://stackoverflow.com/a/40257248

Examples

Setting up a general server instance using a dask LocalCluster backend:

>>> # Create the backend which will be responsible for distributing the calculations
>>> from propertyestimator.backends import DaskLocalCluster, ComputeResources
>>> calculation_backend = DaskLocalCluster(1)
>>>
>>> # Create the backend which will be responsible for storing and retrieving
>>> # the data from previous calculations
>>> from propertyestimator.storage import LocalFileStorage
>>> storage_backend = LocalFileStorage()
>>>
>>> # Create the server to which all estimation requests will be submitted
>>> from propertyestimator.server import PropertyEstimatorServer
>>> property_server = PropertyEstimatorServer(calculation_backend, storage_backend)
>>>
>>> # Instruct the server to listen for incoming requests
>>> property_server.start_listening_loop()
__init__(calculation_backend, storage_backend, port=8000, working_directory='working-data')[source]

Constructs a new PropertyEstimatorServer object.

Parameters
  • calculation_backend (PropertyEstimatorBackend) – The backend to use for executing calculations.

  • storage_backend (PropertyEstimatorStorage) – The backend to use for storing information from any calculations.

  • port (int) – The port on which to listen for incoming client requests.

  • working_directory (str) – The local directory in which to store all local, temporary calculation data.

Methods

__init__(calculation_backend, storage_backend)

Constructs a new PropertyEstimatorServer object.

add_socket(socket)

Singular version of add_sockets.

add_sockets(sockets)

Makes this server start accepting connections on the given sockets.

bind(port[, address, family, backlog, …])

Binds this server to the given port on the given address.

handle_stream(stream, address)

A routine to handle incoming requests from a property estimator TCP client.

listen(port[, address])

Starts accepting connections on the given port.

start([num_processes])

Starts this server in the IOLoop.

start_listening_loop()

Starts the main (blocking) server IOLoop which will run until the user kills the process.

stop()

Stops the property calculation server and its provided backend.

class ServerEstimationRequest(estimation_id='', queued_properties=None, options=None, force_field_id=None, parameter_gradient_keys=None)[source]

Represents a request for the server to estimate a set of properties. Such requests are expected to only estimate properties for a single system (e.g. fixed components in a fixed ratio).

json()

Creates a JSON representation of this class.

Returns

The JSON representation of this class.

Return type

str
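The exact wire format is library-specific, but a typed JSON payload of the kind json() produces can be sketched with the standard library alone. The "@type" tag and the field values below are assumptions for illustration, not the library's actual schema:

```python
import json

# Hypothetical request state; the field names mirror the constructor
# arguments of ServerEstimationRequest, but the values are made up.
request_state = {
    "@type": "ServerEstimationRequest",  # assumed type tag for round-tripping
    "estimation_id": "3f2a",
    "queued_properties": [],
    "force_field_id": "ff-0",
}

# json() would return a single string much like this one.
encoded = json.dumps(request_state)
```

Embedding a type tag alongside the data is what makes the string "typed": a reader can recover not just the field values but also which class to rebuild.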

classmethod parse_json(string_contents, encoding='utf8')

Parses a typed JSON string into the corresponding class structure.

Parameters
  • string_contents (str or bytes) – The typed JSON string.

  • encoding (str) – The encoding of the string_contents.

Returns

The parsed class.

Return type

Any
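One way such a typed parser can work is to read a type tag out of the JSON and dispatch to a registered class. This is a standard-library sketch of the idea only; the "@type" tag, the registry, and ExampleRequest are assumptions, not the library's actual mechanism:

```python
import json

class ExampleRequest:
    """A stand-in class used purely to illustrate typed parsing."""
    def __init__(self, estimation_id=""):
        self.estimation_id = estimation_id

# A registry mapping type tags back to the classes they name.
_registry = {"ExampleRequest": ExampleRequest}

def parse_typed_json(string_contents, encoding="utf8"):
    # Accept both str and bytes, as the documented signature does.
    if isinstance(string_contents, bytes):
        string_contents = string_contents.decode(encoding)
    data = json.loads(string_contents)
    # Pop the tag, look up the class, and rebuild an instance from the rest.
    cls = _registry[data.pop("@type")]
    return cls(**data)

parsed = parse_typed_json(b'{"@type": "ExampleRequest", "estimation_id": "3f2a"}')
```

The return type is "Any" in the signature above precisely because the class to instantiate is only known once the tag has been read.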

async handle_stream(stream, address)[source]

A routine to handle incoming requests from a property estimator TCP client.

Notes

This method is based on the StackOverflow response from A. Jesse Jiryu Davis: https://stackoverflow.com/a/40257248

Parameters
  • stream (IOStream) – An IO stream used to pass messages between the server and client.

  • address (str) – The address from which the request came.
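The StackOverflow pattern referenced in the notes frames each TCP message with a fixed-size length header, so the receiver knows exactly how many bytes to read from the stream before decoding. A minimal sketch of that framing with the standard library (the 4-byte little-endian unsigned-integer header is an assumption about the wire format, not the library's confirmed protocol):

```python
import struct

def pack_message(payload: bytes) -> bytes:
    # Prefix the payload with its length as a 4-byte unsigned integer.
    return struct.pack("<I", len(payload)) + payload

def unpack_message(framed: bytes) -> bytes:
    # Read the 4-byte header, then slice out exactly that many bytes.
    (length,) = struct.unpack("<I", framed[:4])
    return framed[4:4 + length]

framed = pack_message(b'{"estimation_id": "3f2a"}')
payload = unpack_message(framed)
```

Length-prefixed framing is what lets a handler like this one call "read exactly n bytes" on the stream instead of guessing where one JSON message ends and the next begins.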

start_listening_loop()[source]

Starts the main (blocking) server IOLoop which will run until the user kills the process.

stop()[source]

Stops the property calculation server and its provided backend.

add_socket(socket)

Singular version of add_sockets. Takes a single socket object.

add_sockets(sockets)

Makes this server start accepting connections on the given sockets.

The sockets parameter is a list of socket objects such as those returned by tornado.netutil.bind_sockets. add_sockets is typically used in combination with that method and tornado.process.fork_processes to provide greater control over the initialization of a multi-process server.

bind(port, address=None, family=<AddressFamily.AF_UNSPEC: 0>, backlog=128, reuse_port=False)

Binds this server to the given port on the given address.

To start the server, call start. If you want to run this server in a single process, you can call listen as a shortcut to the sequence of bind and start calls.

Address may be either an IP address or hostname. If it’s a hostname, the server will listen on all IP addresses associated with the name. Address may be an empty string or None to listen on all available interfaces. Family may be set to either socket.AF_INET or socket.AF_INET6 to restrict to IPv4 or IPv6 addresses, otherwise both will be used if available.

The backlog argument has the same meaning as for socket.socket.listen. The reuse_port argument has the same meaning as for tornado.netutil.bind_sockets.

This method may be called multiple times prior to start to listen on multiple ports or interfaces.

Changed in version 4.4: Added the reuse_port argument.
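The address and family semantics above can be illustrated with the standard library's socket module, which tornado's bind_sockets builds on. This sketch binds a single listening socket on an ephemeral port rather than reproducing tornado's full multi-socket behaviour:

```python
import socket

# AF_UNSPEC lets getaddrinfo return both IPv4 and IPv6 candidates;
# a host of None with AI_PASSIVE means "all available interfaces",
# and port 0 asks the OS for any free (ephemeral) port.
candidates = socket.getaddrinfo(
    None, 0, socket.AF_UNSPEC, socket.SOCK_STREAM, 0, socket.AI_PASSIVE
)

family, socktype, proto, _, sockaddr = candidates[0]
sock = socket.socket(family, socktype, proto)
sock.bind(sockaddr)
sock.listen(128)  # 128 here plays the role of the backlog argument above

bound_port = sock.getsockname()[1]  # the port the OS actually chose
sock.close()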

listen(port, address='')

Starts accepting connections on the given port.

This method may be called more than once to listen on multiple ports. listen takes effect immediately; it is not necessary to call TCPServer.start afterwards. It is, however, necessary to start the IOLoop.

start(num_processes=1)

Starts this server in the IOLoop.

By default, we run the server in this process and do not fork any additional child process.

If num_processes is None or <= 0, we detect the number of cores available on this machine and fork that number of child processes. If num_processes is given and > 1, we fork that specific number of sub-processes.

Since we use processes and not threads, there is no shared memory between any server code.

Note that multiple processes are not compatible with the autoreload module (or the autoreload=True option to tornado.web.Application which defaults to True when debug=True). When using multiple processes, no IOLoops can be created or referenced until after the call to TCPServer.start(n).
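The process-count rule described above amounts to a small piece of logic, sketched here for clarity (resolve_num_processes is a hypothetical helper name, not part of the tornado or propertyestimator API):

```python
import os

def resolve_num_processes(num_processes=1):
    """Mirror the documented rule: None or <= 0 means one child
    process per detected CPU core; any value > 0 is used as given."""
    if num_processes is None or num_processes <= 0:
        return os.cpu_count() or 1  # fall back to 1 if detection fails
    return num_processes
```

With the default of 1 the server stays in the current process, which is why forking only happens when the caller explicitly asks for it.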