Modules

Dataset object

Torch Dataset object that loads the data used to feed the QuantumDeepField model.

class datasets.QDFDataset(directory: str)

Preprocessing

Collection of utility functions for preprocessing the data.

preprocess.create_dataset(dir_dataset, filename, basis_set, radius, grid_interval, orbital_dict, property=True)

Create a preprocessed dataset from the raw data files in dir_dataset, using the given basis_set, radius, and grid_interval; atomic orbital types are indexed via orbital_dict.

preprocess.create_distancematrix(coords1, coords2)

Create the distance matrix from coords1 and coords2, where coords = [[x_1, y_1, z_1], [x_2, y_2, z_2], …]. For example, when coords1 is field_coords and coords2 is atomic_coords of a molecule, each element of the matrix is the distance between a field point and an atomic position in the molecule. Note that we transform all 0 elements in the distance matrix into a large value (e.g., 1e6) because we use the Gaussian: exp(-d^2), where d is the distance, and exp(-1e6^2) becomes 0.
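
A minimal sketch of this computation (illustrative only, not necessarily the library's actual implementation; the fill value 1e6 follows the note above):

    import numpy as np

    def distance_matrix_sketch(coords1, coords2, fill=1e6):
        c1 = np.asarray(coords1)[:, None, :]  # shape (N, 1, 3)
        c2 = np.asarray(coords2)[None, :, :]  # shape (1, M, 3)
        d = np.linalg.norm(c1 - c2, axis=2)   # (N, M) pairwise distances
        d[d == 0.0] = fill                    # exp(-fill**2) underflows to 0
        return d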

preprocess.create_field(sphere, coords)

Create the grid field of a molecule.
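
Conceptually, the field is the per-atom sphere translated to every atomic position and merged. A hedged sketch (deduplication of overlapping grid points is an assumed convention):

    import numpy as np

    def field_sketch(sphere, atomic_coords):
        points = np.concatenate([np.asarray(sphere) + c
                                 for c in np.asarray(atomic_coords)])
        return np.unique(np.round(points, 6), axis=0)  # drop duplicate grid points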

preprocess.create_orbitals(orbitals, orbital_dict)

Transform the atomic orbital types (e.g., H1s, C1s, N2s, and O2p) into the indices (e.g., H1s=0, C1s=1, N2s=2, and O2p=3) using orbital_dict.
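
A minimal sketch, assuming orbital_dict behaves like a defaultdict that assigns a fresh index to each orbital type on first sight:

    from collections import defaultdict

    orbital_dict = defaultdict(lambda: len(orbital_dict))

    def orbitals_to_indices(orbitals, orbital_dict):
        # e.g., ['H1s', 'C1s', 'N2s', 'O2p'] -> [0, 1, 2, 3]
        return [orbital_dict[o] for o in orbitals]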

preprocess.create_potential(distance_matrix, atomic_numbers)

Create the Gaussian external potential used in Brockherde et al., 2017, Bypassing the Kohn-Sham equations with machine learning.
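
A hedged sketch, assuming the potential takes the Gaussian form V(r) = -sum_i Z_i * exp(-d_i^2) over atoms i, with d taken from the distance matrix above (the exact form and sign convention are assumptions):

    import numpy as np

    def gaussian_potential_sketch(distance_matrix, atomic_numbers):
        gaussians = np.exp(-np.asarray(distance_matrix) ** 2)  # (N_field, N_atoms)
        return -gaussians @ np.asarray(atomic_numbers)         # one value per field point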

preprocess.create_sphere(radius: float, grid_interval: float)

Create the sphere to be placed on each atom of a molecule.
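
A minimal sketch of building such a grid sphere (whether the origin itself is kept is an assumed convention):

    import numpy as np
    from itertools import product

    def sphere_sketch(radius, grid_interval):
        ticks = np.arange(-radius, radius + 1e-9, grid_interval)
        points = np.array(list(product(ticks, repeat=3)))
        return points[np.linalg.norm(points, axis=1) <= radius]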

Training/Testing wrappers

Wrapper objects for training/testing the QuantumDeepField model.

class wrappers.Tester(model: Module)

Tester object wrapper. This object wraps the testing loop for the model.

Parameters:
  • model (torch.nn.Module) – Torch loaded model to be tested.

save_model(model: Module, filename: str) → None

Saves the model to the given filename. Wraps the torch.save method to save ONLY the state dictionary.

Parameters:
  • model (torch.nn.Module) – Torch model whose state dictionary is to be saved.

  • filename (str) – name of the file where the torch state dictionary is to be saved.
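
Because only the state dictionary is saved, the model class must be re-instantiated before loading. A standard PyTorch sketch of what this wraps:

    import torch

    def save_model_sketch(model, filename):
        # persist only parameters/buffers, not the pickled module object
        torch.save(model.state_dict(), filename)

    def load_model_sketch(model, filename):
        # model must be constructed first; then restore its weights
        model.load_state_dict(torch.load(filename))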

save_prediction(prediction: str, filename: str) → None

Saves the prediction summary into the file specified by the filename parameter. Note that if the file exists, the new results will OVERWRITE the previous predictions.

Parameters:
  • prediction (str) – string summary of predictions to be dumped into a file.

  • filename (str) – string filename for the file where the predictions are to be saved.

save_result(result: str, filename: str) → None

Saves the results summary into the file specified by the filename parameter. Note that if the file exists, the new results will be appended (the file-mode distinction is sketched after the parameter list below).

Parameters:
  • result (str) – string summary of results to be dumped into a file.

  • filename (str) – string filename for the file where the results are to be saved.
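
The two methods differ only in file mode: predictions overwrite, results accumulate. A hedged sketch of the assumed behavior:

    def save_text_sketch(text, filename, overwrite=True):
        # save_prediction-style when overwrite=True ('w'),
        # save_result-style when overwrite=False ('a')
        mode = 'w' if overwrite else 'a'
        with open(filename, mode) as f:
            f.write(text + '\n')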

test(dataloader: DataLoader, time: bool = False) → tuple

Main method of the Tester wrapper object. Given the dataloader object, it performs a single pass over the whole dataloader (one epoch). For every batch of the data it computes the loss on the target property (target='E', supervised).

Parameters:
  • dataloader (torch.utils.data.DataLoader) – Dataloader object containing all the testing samples.

  • time (bool) – Boolean flag to enable timing of the code execution (defaults to False).

Returns:

Mean Absolute Error computed on the given test set and a summary string with the predictions.

Return type:

tuple

class wrappers.Trainer(model: Module, lr: float, lr_decay: float, step_size: int)

Trainer object wrapper. This object wraps the training loop for the model. It uses the Adam optimizer and a scheduler that exponentially decays the learning rate every step_size epochs, i.e. LR_t = LR_{t-step_size} * lr_decay (see the sketch after the parameter list).

Parameters:
  • model (torch.nn.Module) – Torch loaded model to be trained.

  • lr (float) – Learning rate hyperparameter value.

  • lr_decay (float) – Multiplicative factor of the learning rate decay.

  • step_size (int) – Period, in epochs, of the learning rate decay.
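
A hedged sketch of the assumed optimizer/scheduler wiring, using the standard PyTorch StepLR scheduler, which implements exactly the decay rule above:

    import torch

    def make_optimizer_sketch(model, lr, lr_decay, step_size):
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        scheduler = torch.optim.lr_scheduler.StepLR(
            optimizer, step_size=step_size, gamma=lr_decay)
        return optimizer, scheduler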

optimize(loss: tensor, optimizer: Adam) → None

Wrapping function for PyTorch's routine of zeroing the gradients, backpropagating the loss, and updating the parameters with the new gradients.

Parameters:
  • loss (torch.tensor) – Torch tensor holding the computed loss to be backpropagated.

  • optimizer (torch.optim.Adam) – Wrapped optimizer whose gradients are zeroed and then updated with the new backpropagation.
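
This wraps the canonical PyTorch update step:

    def optimize_sketch(loss, optimizer):
        optimizer.zero_grad()  # clear gradients from the previous batch
        loss.backward()        # backpropagate to populate parameter gradients
        optimizer.step()       # apply the Adam update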

train(dataloader: DataLoader) → tuple

Main method of the Trainer wrapper object. Given the dataloader object, it performs a single pass over the whole dataloader (one epoch). For every batch of the data it first computes and optimizes the loss on the target property (target='E', supervised), then computes and optimizes the loss on the potentials (target='V', unsupervised). It thus minimizes two loss functions, in terms of E and V.

Parameters:

dataloader (torch.utils.data.DataLoader) – Dataloader object containing all the training samples.

Returns:

The total loss on the supervised target E and the total loss on the unsupervised objective V, accumulated over the whole epoch.

Return type:

tuple
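
A hypothetical end-to-end use of both wrappers (hyperparameter values, file names, and the model constructor are illustrative only; train_loader and test_loader are torch DataLoader objects built elsewhere):

    model = QuantumDeepField(...)  # assumed model constructor
    trainer = wrappers.Trainer(model, lr=1e-4, lr_decay=0.5, step_size=4)
    tester = wrappers.Tester(model)

    for epoch in range(50):
        loss_E, loss_V = trainer.train(train_loader)
        mae, predictions = tester.test(test_loader)
        tester.save_result(f'{epoch}\t{loss_E}\t{loss_V}\t{mae}', 'result.txt')

    tester.save_prediction(predictions, 'prediction.txt')
    tester.save_model(model, 'model.pth')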

QuantumDeepField