
google-native.ml/v1.getJob

Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

Describes a job.

Using getJob

Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

function getJob(args: GetJobArgs, opts?: InvokeOptions): Promise<GetJobResult>
function getJobOutput(args: GetJobOutputArgs, opts?: InvokeOptions): Output<GetJobResult>
def get_job(job_id: Optional[str] = None,
            project: Optional[str] = None,
            opts: Optional[InvokeOptions] = None) -> GetJobResult
def get_job_output(job_id: Optional[pulumi.Input[str]] = None,
                   project: Optional[pulumi.Input[str]] = None,
                   opts: Optional[InvokeOptions] = None) -> Output[GetJobResult]
func LookupJob(ctx *Context, args *LookupJobArgs, opts ...InvokeOption) (*LookupJobResult, error)
func LookupJobOutput(ctx *Context, args *LookupJobOutputArgs, opts ...InvokeOption) LookupJobResultOutput

> Note: This function is named LookupJob in the Go SDK.

public static class GetJob 
{
    public static Task<GetJobResult> InvokeAsync(GetJobArgs args, InvokeOptions? opts = null)
    public static Output<GetJobResult> Invoke(GetJobInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetJobResult> getJob(GetJobArgs args, InvokeOptions options)
// Output-based functions aren't available in Java yet
fn::invoke:
  function: google-native:ml/v1:getJob
  arguments:
    # arguments dictionary

The following arguments are supported:

JobId This property is required. string
Project string
JobId This property is required. string
Project string
jobId This property is required. String
project String
jobId This property is required. string
project string
job_id This property is required. str
project str
jobId This property is required. String
project String
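
For concreteness, here is a minimal TypeScript sketch of both invocation forms. The job id is a placeholder, and project is omitted so the provider's default project is assumed:

import * as google_native from "@pulumi/google-native";

// Direct form: plain arguments, Promise-wrapped result.
const jobPromise = google_native.ml.v1.getJob({
    jobId: "my-training-job",      // hypothetical job id
    // project: "my-project",      // optional; falls back to the provider's default project
});
export const jobState = jobPromise.then(job => job.state);

// Output form: Input-wrapped arguments, Output-wrapped result.
const job = google_native.ml.v1.getJobOutput({
    jobId: "my-training-job",
});
export const jobCreateTime = job.createTime;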

getJob Result

The following output properties are available:

CreateTime string
When the job was created.
EndTime string
When the job processing was completed.
ErrorMessage string
The details of a failure or a cancellation.
Etag string
etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a job from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform job updates in order to avoid race conditions: An etag is returned in the response to GetJob, and systems are expected to put that etag in the request to UpdateJob to ensure that their change will be applied to the same version of the job.
JobId string
The user-specified id of the job.
JobPosition string
It only takes effect when the job is in the QUEUED state. If it is positive, it indicates the job's position in the job scheduler. It is 0 when the job is already scheduled.
Labels Dictionary<string, string>
Optional. One or more labels that you can add to organize your jobs. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels.
PredictionInput Pulumi.GoogleNative.Ml.V1.Outputs.GoogleCloudMlV1__PredictionInputResponse
Input parameters to create a prediction job.
PredictionOutput Pulumi.GoogleNative.Ml.V1.Outputs.GoogleCloudMlV1__PredictionOutputResponse
The current prediction job result.
StartTime string
When the job processing was started.
State string
The detailed state of a job.
TrainingInput Pulumi.GoogleNative.Ml.V1.Outputs.GoogleCloudMlV1__TrainingInputResponse
Input parameters to create a training job.
TrainingOutput Pulumi.GoogleNative.Ml.V1.Outputs.GoogleCloudMlV1__TrainingOutputResponse
The current training job result.
CreateTime string
When the job was created.
EndTime string
When the job processing was completed.
ErrorMessage string
The details of a failure or a cancellation.
Etag string
etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a job from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform job updates in order to avoid race conditions: An etag is returned in the response to GetJob, and systems are expected to put that etag in the request to UpdateJob to ensure that their change will be applied to the same version of the job.
JobId string
The user-specified id of the job.
JobPosition string
It only takes effect when the job is in the QUEUED state. If it is positive, it indicates the job's position in the job scheduler. It is 0 when the job is already scheduled.
Labels map[string]string
Optional. One or more labels that you can add to organize your jobs. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels.
PredictionInput GoogleCloudMlV1__PredictionInputResponse
Input parameters to create a prediction job.
PredictionOutput GoogleCloudMlV1__PredictionOutputResponse
The current prediction job result.
StartTime string
When the job processing was started.
State string
The detailed state of a job.
TrainingInput GoogleCloudMlV1__TrainingInputResponse
Input parameters to create a training job.
TrainingOutput GoogleCloudMlV1__TrainingOutputResponse
The current training job result.
createTime String
When the job was created.
endTime String
When the job processing was completed.
errorMessage String
The details of a failure or a cancellation.
etag String
etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a job from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform job updates in order to avoid race conditions: An etag is returned in the response to GetJob, and systems are expected to put that etag in the request to UpdateJob to ensure that their change will be applied to the same version of the job.
jobId String
The user-specified id of the job.
jobPosition String
It only takes effect when the job is in the QUEUED state. If it is positive, it indicates the job's position in the job scheduler. It is 0 when the job is already scheduled.
labels Map<String,String>
Optional. One or more labels that you can add to organize your jobs. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels.
predictionInput GoogleCloudMlV1__PredictionInputResponse
Input parameters to create a prediction job.
predictionOutput GoogleCloudMlV1__PredictionOutputResponse
The current prediction job result.
startTime String
When the job processing was started.
state String
The detailed state of a job.
trainingInput GoogleCloudMlV1__TrainingInputResponse
Input parameters to create a training job.
trainingOutput GoogleCloudMlV1__TrainingOutputResponse
The current training job result.
createTime string
When the job was created.
endTime string
When the job processing was completed.
errorMessage string
The details of a failure or a cancellation.
etag string
etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a job from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform job updates in order to avoid race conditions: An etag is returned in the response to GetJob, and systems are expected to put that etag in the request to UpdateJob to ensure that their change will be applied to the same version of the job.
jobId string
The user-specified id of the job.
jobPosition string
It only takes effect when the job is in the QUEUED state. If it is positive, it indicates the job's position in the job scheduler. It is 0 when the job is already scheduled.
labels {[key: string]: string}
Optional. One or more labels that you can add to organize your jobs. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels.
predictionInput GoogleCloudMlV1__PredictionInputResponse
Input parameters to create a prediction job.
predictionOutput GoogleCloudMlV1__PredictionOutputResponse
The current prediction job result.
startTime string
When the job processing was started.
state string
The detailed state of a job.
trainingInput GoogleCloudMlV1__TrainingInputResponse
Input parameters to create a training job.
trainingOutput GoogleCloudMlV1__TrainingOutputResponse
The current training job result.
create_time str
When the job was created.
end_time str
When the job processing was completed.
error_message str
The details of a failure or a cancellation.
etag str
etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a job from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform job updates in order to avoid race conditions: An etag is returned in the response to GetJob, and systems are expected to put that etag in the request to UpdateJob to ensure that their change will be applied to the same version of the job.
job_id str
The user-specified id of the job.
job_position str
It only takes effect when the job is in the QUEUED state. If it is positive, it indicates the job's position in the job scheduler. It is 0 when the job is already scheduled.
labels Mapping[str, str]
Optional. One or more labels that you can add to organize your jobs. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels.
prediction_input GoogleCloudMlV1PredictionInputResponse
Input parameters to create a prediction job.
prediction_output GoogleCloudMlV1PredictionOutputResponse
The current prediction job result.
start_time str
When the job processing was started.
state str
The detailed state of a job.
training_input GoogleCloudMlV1TrainingInputResponse
Input parameters to create a training job.
training_output GoogleCloudMlV1TrainingOutputResponse
The current training job result.
createTime String
When the job was created.
endTime String
When the job processing was completed.
errorMessage String
The details of a failure or a cancellation.
etag String
etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a job from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform job updates in order to avoid race conditions: An etag is returned in the response to GetJob, and systems are expected to put that etag in the request to UpdateJob to ensure that their change will be applied to the same version of the job.
jobId String
The user-specified id of the job.
jobPosition String
It only takes effect when the job is in the QUEUED state. If it is positive, it indicates the job's position in the job scheduler. It is 0 when the job is already scheduled.
labels Map<String>
Optional. One or more labels that you can add to organize your jobs. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels.
predictionInput Property Map
Input parameters to create a prediction job.
predictionOutput Property Map
The current prediction job result.
startTime String
When the job processing was started.
state String
The detailed state of a job.
trainingInput Property Map
Input parameters to create a training job.
trainingOutput Property Map
The current training job result.
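
As a rough illustration of consuming these result properties (TypeScript, with a placeholder job id), the output form lets you lift individual fields directly; the etag export follows the read-modify-write guidance above:

import * as google_native from "@pulumi/google-native";

const job = google_native.ml.v1.getJobOutput({ jobId: "my-training-job" });

// Surface a few of the documented result properties as stack outputs.
export const state = job.state;               // detailed job state, e.g. "SUCCEEDED"
export const errorMessage = job.errorMessage; // populated on failure or cancellation
export const etag = job.etag;                 // pass back on updates to avoid overwriting concurrent changes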

Supporting Types

GoogleCloudMlV1_HyperparameterOutput_HyperparameterMetricResponse

ObjectiveValue This property is required. double
The objective value at this training step.
TrainingStep This property is required. string
The global training step for this metric.
ObjectiveValue This property is required. float64
The objective value at this training step.
TrainingStep This property is required. string
The global training step for this metric.
objectiveValue This property is required. Double
The objective value at this training step.
trainingStep This property is required. String
The global training step for this metric.
objectiveValue This property is required. number
The objective value at this training step.
trainingStep This property is required. string
The global training step for this metric.
objective_value This property is required. float
The objective value at this training step.
training_step This property is required. str
The global training step for this metric.
objectiveValue This property is required. Number
The objective value at this training step.
trainingStep This property is required. String
The global training step for this metric.

GoogleCloudMlV1__AcceleratorConfigResponse

Count This property is required. string
The number of accelerators to attach to each machine running the job.
Type This property is required. string
The type of accelerator to use.
Count This property is required. string
The number of accelerators to attach to each machine running the job.
Type This property is required. string
The type of accelerator to use.
count This property is required. String
The number of accelerators to attach to each machine running the job.
type This property is required. String
The type of accelerator to use.
count This property is required. string
The number of accelerators to attach to each machine running the job.
type This property is required. string
The type of accelerator to use.
count This property is required. str
The number of accelerators to attach to each machine running the job.
type This property is required. str
The type of accelerator to use.
count This property is required. String
The number of accelerators to attach to each machine running the job.
type This property is required. String
The type of accelerator to use.

GoogleCloudMlV1__BuiltInAlgorithmOutputResponse

Framework This property is required. string
Framework on which the built-in algorithm was trained.
ModelPath This property is required. string
The Cloud Storage path to the model/ directory where the training job saves the trained model. Only set for successful jobs that don't use hyperparameter tuning.
PythonVersion This property is required. string
Python version on which the built-in algorithm was trained.
RuntimeVersion This property is required. string
AI Platform runtime version on which the built-in algorithm was trained.
Framework This property is required. string
Framework on which the built-in algorithm was trained.
ModelPath This property is required. string
The Cloud Storage path to the model/ directory where the training job saves the trained model. Only set for successful jobs that don't use hyperparameter tuning.
PythonVersion This property is required. string
Python version on which the built-in algorithm was trained.
RuntimeVersion This property is required. string
AI Platform runtime version on which the built-in algorithm was trained.
framework This property is required. String
Framework on which the built-in algorithm was trained.
modelPath This property is required. String
The Cloud Storage path to the model/ directory where the training job saves the trained model. Only set for successful jobs that don't use hyperparameter tuning.
pythonVersion This property is required. String
Python version on which the built-in algorithm was trained.
runtimeVersion This property is required. String
AI Platform runtime version on which the built-in algorithm was trained.
framework This property is required. string
Framework on which the built-in algorithm was trained.
modelPath This property is required. string
The Cloud Storage path to the model/ directory where the training job saves the trained model. Only set for successful jobs that don't use hyperparameter tuning.
pythonVersion This property is required. string
Python version on which the built-in algorithm was trained.
runtimeVersion This property is required. string
AI Platform runtime version on which the built-in algorithm was trained.
framework This property is required. str
Framework on which the built-in algorithm was trained.
model_path This property is required. str
The Cloud Storage path to the model/ directory where the training job saves the trained model. Only set for successful jobs that don't use hyperparameter tuning.
python_version This property is required. str
Python version on which the built-in algorithm was trained.
runtime_version This property is required. str
AI Platform runtime version on which the built-in algorithm was trained.
framework This property is required. String
Framework on which the built-in algorithm was trained.
modelPath This property is required. String
The Cloud Storage path to the model/ directory where the training job saves the trained model. Only set for successful jobs that don't use hyperparameter tuning.
pythonVersion This property is required. String
Python version on which the built-in algorithm was trained.
runtimeVersion This property is required. String
AI Platform runtime version on which the built-in algorithm was trained.

GoogleCloudMlV1__DiskConfigResponse

BootDiskSizeGb This property is required. int
Size in GB of the boot disk (default is 100GB).
BootDiskType This property is required. string
Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
BootDiskSizeGb This property is required. int
Size in GB of the boot disk (default is 100GB).
BootDiskType This property is required. string
Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
bootDiskSizeGb This property is required. Integer
Size in GB of the boot disk (default is 100GB).
bootDiskType This property is required. String
Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
bootDiskSizeGb This property is required. number
Size in GB of the boot disk (default is 100GB).
bootDiskType This property is required. string
Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
boot_disk_size_gb This property is required. int
Size in GB of the boot disk (default is 100GB).
boot_disk_type This property is required. str
Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
bootDiskSizeGb This property is required. Number
Size in GB of the boot disk (default is 100GB).
bootDiskType This property is required. String
Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).

GoogleCloudMlV1__EncryptionConfigResponse

KmsKeyName This property is required. string
The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource, such as a training job. It has the following format: projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}
KmsKeyName This property is required. string
The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource, such as a training job. It has the following format: projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}
kmsKeyName This property is required. String
The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource, such as a training job. It has the following format: projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}
kmsKeyName This property is required. string
The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource, such as a training job. It has the following format: projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}
kms_key_name This property is required. str
The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource, such as a training job. It has the following format: projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}
kmsKeyName This property is required. String
The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource, such as a training job. It has the following format: projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}

GoogleCloudMlV1__HyperparameterOutputResponse

AllMetrics This property is required. List<Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1_HyperparameterOutput_HyperparameterMetricResponse>
All recorded objective metrics for this trial. This field is not currently populated.
BuiltInAlgorithmOutput This property is required. Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__BuiltInAlgorithmOutputResponse
Details related to built-in algorithms jobs. Only set for trials of built-in algorithms jobs that have succeeded.
EndTime This property is required. string
End time for the trial.
FinalMetric This property is required. Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1_HyperparameterOutput_HyperparameterMetricResponse
The final objective metric seen for this trial.
Hyperparameters This property is required. Dictionary<string, string>
The hyperparameters given to this trial.
IsTrialStoppedEarly This property is required. bool
True if the trial is stopped early.
StartTime This property is required. string
Start time for the trial.
State This property is required. string
The detailed state of the trial.
TrialId This property is required. string
The trial id for these results.
WebAccessUris This property is required. Dictionary<string, string>
URIs for accessing interactive shells (one URI for each training node). Only available if this trial is part of a hyperparameter tuning job and the job's training_input.enable_web_access is true. The keys are names of each node in the training job; for example, master-replica-0 for the master node, worker-replica-0 for the first worker, and ps-replica-0 for the first parameter server. The values are the URIs for each node's interactive shell.
AllMetrics This property is required. []GoogleCloudMlV1_HyperparameterOutput_HyperparameterMetricResponse
All recorded objective metrics for this trial. This field is not currently populated.
BuiltInAlgorithmOutput This property is required. GoogleCloudMlV1__BuiltInAlgorithmOutputResponse
Details related to built-in algorithms jobs. Only set for trials of built-in algorithms jobs that have succeeded.
EndTime This property is required. string
End time for the trial.
FinalMetric This property is required. GoogleCloudMlV1_HyperparameterOutput_HyperparameterMetricResponse
The final objective metric seen for this trial.
Hyperparameters This property is required. map[string]string
The hyperparameters given to this trial.
IsTrialStoppedEarly This property is required. bool
True if the trial is stopped early.
StartTime This property is required. string
Start time for the trial.
State This property is required. string
The detailed state of the trial.
TrialId This property is required. string
The trial id for these results.
WebAccessUris This property is required. map[string]string
URIs for accessing interactive shells (one URI for each training node). Only available if this trial is part of a hyperparameter tuning job and the job's training_input.enable_web_access is true. The keys are names of each node in the training job; for example, master-replica-0 for the master node, worker-replica-0 for the first worker, and ps-replica-0 for the first parameter server. The values are the URIs for each node's interactive shell.
allMetrics This property is required. List<GoogleCloudMlV1_HyperparameterOutput_HyperparameterMetricResponse>
All recorded objective metrics for this trial. This field is not currently populated.
builtInAlgorithmOutput This property is required. GoogleCloudMlV1__BuiltInAlgorithmOutputResponse
Details related to built-in algorithms jobs. Only set for trials of built-in algorithms jobs that have succeeded.
endTime This property is required. String
End time for the trial.
finalMetric This property is required. GoogleCloudMlV1_HyperparameterOutput_HyperparameterMetricResponse
The final objective metric seen for this trial.
hyperparameters This property is required. Map<String,String>
The hyperparameters given to this trial.
isTrialStoppedEarly This property is required. Boolean
True if the trial is stopped early.
startTime This property is required. String
Start time for the trial.
state This property is required. String
The detailed state of the trial.
trialId This property is required. String
The trial id for these results.
webAccessUris This property is required. Map<String,String>
URIs for accessing interactive shells (one URI for each training node). Only available if this trial is part of a hyperparameter tuning job and the job's training_input.enable_web_access is true. The keys are names of each node in the training job; for example, master-replica-0 for the master node, worker-replica-0 for the first worker, and ps-replica-0 for the first parameter server. The values are the URIs for each node's interactive shell.
allMetrics This property is required. GoogleCloudMlV1_HyperparameterOutput_HyperparameterMetricResponse[]
All recorded objective metrics for this trial. This field is not currently populated.
builtInAlgorithmOutput This property is required. GoogleCloudMlV1__BuiltInAlgorithmOutputResponse
Details related to built-in algorithms jobs. Only set for trials of built-in algorithms jobs that have succeeded.
endTime This property is required. string
End time for the trial.
finalMetric This property is required. GoogleCloudMlV1_HyperparameterOutput_HyperparameterMetricResponse
The final objective metric seen for this trial.
hyperparameters This property is required. {[key: string]: string}
The hyperparameters given to this trial.
isTrialStoppedEarly This property is required. boolean
True if the trial is stopped early.
startTime This property is required. string
Start time for the trial.
state This property is required. string
The detailed state of the trial.
trialId This property is required. string
The trial id for these results.
webAccessUris This property is required. {[key: string]: string}
URIs for accessing interactive shells (one URI for each training node). Only available if this trial is part of a hyperparameter tuning job and the job's training_input.enable_web_access is true. The keys are names of each node in the training job; for example, master-replica-0 for the master node, worker-replica-0 for the first worker, and ps-replica-0 for the first parameter server. The values are the URIs for each node's interactive shell.
all_metrics This property is required. Sequence[GoogleCloudMlV1HyperparameterOutput_HyperparameterMetricResponse]
All recorded objective metrics for this trial. This field is not currently populated.
built_in_algorithm_output This property is required. GoogleCloudMlV1BuiltInAlgorithmOutputResponse
Details related to built-in algorithms jobs. Only set for trials of built-in algorithms jobs that have succeeded.
end_time This property is required. str
End time for the trial.
final_metric This property is required. GoogleCloudMlV1HyperparameterOutput_HyperparameterMetricResponse
The final objective metric seen for this trial.
hyperparameters This property is required. Mapping[str, str]
The hyperparameters given to this trial.
is_trial_stopped_early This property is required. bool
True if the trial is stopped early.
start_time This property is required. str
Start time for the trial.
state This property is required. str
The detailed state of the trial.
trial_id This property is required. str
The trial id for these results.
web_access_uris This property is required. Mapping[str, str]
URIs for accessing interactive shells (one URI for each training node). Only available if this trial is part of a hyperparameter tuning job and the job's training_input.enable_web_access is true. The keys are names of each node in the training job; for example, master-replica-0 for the master node, worker-replica-0 for the first worker, and ps-replica-0 for the first parameter server. The values are the URIs for each node's interactive shell.
allMetrics This property is required. List<Property Map>
All recorded objective metrics for this trial. This field is not currently populated.
builtInAlgorithmOutput This property is required. Property Map
Details related to built-in algorithms jobs. Only set for trials of built-in algorithms jobs that have succeeded.
endTime This property is required. String
End time for the trial.
finalMetric This property is required. Property Map
The final objective metric seen for this trial.
hyperparameters This property is required. Map<String>
The hyperparameters given to this trial.
isTrialStoppedEarly This property is required. Boolean
True if the trial is stopped early.
startTime This property is required. String
Start time for the trial.
state This property is required. String
The detailed state of the trial.
trialId This property is required. String
The trial id for these results.
webAccessUris This property is required. Map<String>
URIs for accessing interactive shells (one URI for each training node). Only available if this trial is part of a hyperparameter tuning job and the job's training_input.enable_web_access is true. The keys are names of each node in the training job; for example, master-replica-0 for the master node, worker-replica-0 for the first worker, and ps-replica-0 for the first parameter server. The values are the URIs for each node's interactive shell.
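
A hedged TypeScript sketch of inspecting these trial records: it assumes the trials live on the job's trainingOutput (the trials field belongs to GoogleCloudMlV1__TrainingOutputResponse, which is not expanded in this listing), so treat that property path as an assumption rather than a documented guarantee:

import * as google_native from "@pulumi/google-native";

const job = google_native.ml.v1.getJobOutput({ jobId: "my-tuning-job" }); // hypothetical job id

// Pick the best final objective value across trials, if any are present.
export const bestObjective = job.trainingOutput.apply(out => {
    const trials = out?.trials ?? [];   // assumption: trainingOutput.trials holds HyperparameterOutputResponse records
    const values = trials
        .map(trial => trial.finalMetric?.objectiveValue)
        .filter((v): v is number => typeof v === "number");
    return values.length > 0 ? Math.max(...values) : undefined;
});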

GoogleCloudMlV1__HyperparameterSpecResponse

Algorithm This property is required. string
Optional. The search algorithm specified for the hyperparameter tuning job. Uses the default AI Platform hyperparameter tuning algorithm if unspecified.
EnableTrialEarlyStopping This property is required. bool
Optional. Indicates if the hyperparameter tuning job enables auto trial early stopping.
Goal This property is required. string
The type of goal to use for tuning. Available types are MAXIMIZE and MINIMIZE. Defaults to MAXIMIZE.
HyperparameterMetricTag This property is required. string
Optional. The TensorFlow summary tag name to use for optimizing trials. For current versions of TensorFlow, this tag name should exactly match what is shown in TensorBoard, including all scopes. For versions of TensorFlow prior to 0.12, this should be only the tag passed to tf.Summary. By default, "training/hptuning/metric" will be used.
MaxFailedTrials This property is required. int
Optional. The number of failed trials that need to be seen before failing the hyperparameter tuning job. You can specify this field to override the default failing criteria for AI Platform hyperparameter tuning jobs. Defaults to zero, which means the service decides when a hyperparameter job should fail.
MaxParallelTrials This property is required. int
Optional. The number of training trials to run concurrently. You can reduce the time it takes to perform hyperparameter tuning by adding trials in parallel. However, each trial only benefits from the information gained in completed trials. That means that a trial does not get access to the results of trials running at the same time, which could reduce the quality of the overall optimization. Each trial will use the same scale tier and machine types. Defaults to one.
MaxTrials This property is required. int
Optional. How many training trials should be attempted to optimize the specified hyperparameters. Defaults to one.
Params This property is required. List<Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__ParameterSpecResponse>
The set of parameters to tune.
ResumePreviousJobId This property is required. string
Optional. The prior hyperparameter tuning job id that users hope to continue with. The job id will be used to find the corresponding Vizier study GUID and resume the study.
Algorithm This property is required. string
Optional. The search algorithm specified for the hyperparameter tuning job. Uses the default AI Platform hyperparameter tuning algorithm if unspecified.
EnableTrialEarlyStopping This property is required. bool
Optional. Indicates if the hyperparameter tuning job enables auto trial early stopping.
Goal This property is required. string
The type of goal to use for tuning. Available types are MAXIMIZE and MINIMIZE. Defaults to MAXIMIZE.
HyperparameterMetricTag This property is required. string
Optional. The TensorFlow summary tag name to use for optimizing trials. For current versions of TensorFlow, this tag name should exactly match what is shown in TensorBoard, including all scopes. For versions of TensorFlow prior to 0.12, this should be only the tag passed to tf.Summary. By default, "training/hptuning/metric" will be used.
MaxFailedTrials This property is required. int
Optional. The number of failed trials that need to be seen before failing the hyperparameter tuning job. You can specify this field to override the default failing criteria for AI Platform hyperparameter tuning jobs. Defaults to zero, which means the service decides when a hyperparameter job should fail.
MaxParallelTrials This property is required. int
Optional. The number of training trials to run concurrently. You can reduce the time it takes to perform hyperparameter tuning by adding trials in parallel. However, each trial only benefits from the information gained in completed trials. That means that a trial does not get access to the results of trials running at the same time, which could reduce the quality of the overall optimization. Each trial will use the same scale tier and machine types. Defaults to one.
MaxTrials This property is required. int
Optional. How many training trials should be attempted to optimize the specified hyperparameters. Defaults to one.
Params This property is required. []GoogleCloudMlV1__ParameterSpecResponse
The set of parameters to tune.
ResumePreviousJobId This property is required. string
Optional. The prior hyperparameter tuning job id that users hope to continue with. The job id will be used to find the corresponding Vizier study GUID and resume the study.
algorithm This property is required. String
Optional. The search algorithm specified for the hyperparameter tuning job. Uses the default AI Platform hyperparameter tuning algorithm if unspecified.
enableTrialEarlyStopping This property is required. Boolean
Optional. Indicates if the hyperparameter tuning job enables auto trial early stopping.
goal This property is required. String
The type of goal to use for tuning. Available types are MAXIMIZE and MINIMIZE. Defaults to MAXIMIZE.
hyperparameterMetricTag This property is required. String
Optional. The TensorFlow summary tag name to use for optimizing trials. For current versions of TensorFlow, this tag name should exactly match what is shown in TensorBoard, including all scopes. For versions of TensorFlow prior to 0.12, this should be only the tag passed to tf.Summary. By default, "training/hptuning/metric" will be used.
maxFailedTrials This property is required. Integer
Optional. The number of failed trials that need to be seen before failing the hyperparameter tuning job. You can specify this field to override the default failing criteria for AI Platform hyperparameter tuning jobs. Defaults to zero, which means the service decides when a hyperparameter job should fail.
maxParallelTrials This property is required. Integer
Optional. The number of training trials to run concurrently. You can reduce the time it takes to perform hyperparameter tuning by adding trials in parallel. However, each trial only benefits from the information gained in completed trials. That means that a trial does not get access to the results of trials running at the same time, which could reduce the quality of the overall optimization. Each trial will use the same scale tier and machine types. Defaults to one.
maxTrials This property is required. Integer
Optional. How many training trials should be attempted to optimize the specified hyperparameters. Defaults to one.
params This property is required. List<GoogleCloudMlV1__ParameterSpecResponse>
The set of parameters to tune.
resumePreviousJobId This property is required. String
Optional. The prior hyperparameter tuning job id that users hope to continue with. The job id will be used to find the corresponding Vizier study GUID and resume the study.
algorithm This property is required. string
Optional. The search algorithm specified for the hyperparameter tuning job. Uses the default AI Platform hyperparameter tuning algorithm if unspecified.
enableTrialEarlyStopping This property is required. boolean
Optional. Indicates if the hyperparameter tuning job enables auto trial early stopping.
goal This property is required. string
The type of goal to use for tuning. Available types are MAXIMIZE and MINIMIZE. Defaults to MAXIMIZE.
hyperparameterMetricTag This property is required. string
Optional. The TensorFlow summary tag name to use for optimizing trials. For current versions of TensorFlow, this tag name should exactly match what is shown in TensorBoard, including all scopes. For versions of TensorFlow prior to 0.12, this should be only the tag passed to tf.Summary. By default, "training/hptuning/metric" will be used.
maxFailedTrials This property is required. number
Optional. The number of failed trials that need to be seen before failing the hyperparameter tuning job. You can specify this field to override the default failing criteria for AI Platform hyperparameter tuning jobs. Defaults to zero, which means the service decides when a hyperparameter job should fail.
maxParallelTrials This property is required. number
Optional. The number of training trials to run concurrently. You can reduce the time it takes to perform hyperparameter tuning by adding trials in parallel. However, each trial only benefits from the information gained in completed trials. That means that a trial does not get access to the results of trials running at the same time, which could reduce the quality of the overall optimization. Each trial will use the same scale tier and machine types. Defaults to one.
maxTrials This property is required. number
Optional. How many training trials should be attempted to optimize the specified hyperparameters. Defaults to one.
params This property is required. GoogleCloudMlV1__ParameterSpecResponse[]
The set of parameters to tune.
resumePreviousJobId This property is required. string
Optional. The prior hyperparameter tuning job id that users hope to continue with. The job id will be used to find the corresponding Vizier study GUID and resume the study.
algorithm This property is required. str
Optional. The search algorithm specified for the hyperparameter tuning job. Uses the default AI Platform hyperparameter tuning algorithm if unspecified.
enable_trial_early_stopping This property is required. bool
Optional. Indicates if the hyperparameter tuning job enables auto trial early stopping.
goal This property is required. str
The type of goal to use for tuning. Available types are MAXIMIZE and MINIMIZE. Defaults to MAXIMIZE.
hyperparameter_metric_tag This property is required. str
Optional. The TensorFlow summary tag name to use for optimizing trials. For current versions of TensorFlow, this tag name should exactly match what is shown in TensorBoard, including all scopes. For versions of TensorFlow prior to 0.12, this should be only the tag passed to tf.Summary. By default, "training/hptuning/metric" will be used.
max_failed_trials This property is required. int
Optional. The number of failed trials that need to be seen before failing the hyperparameter tuning job. You can specify this field to override the default failing criteria for AI Platform hyperparameter tuning jobs. Defaults to zero, which means the service decides when a hyperparameter job should fail.
max_parallel_trials This property is required. int
Optional. The number of training trials to run concurrently. You can reduce the time it takes to perform hyperparameter tuning by adding trials in parallel. However, each trial only benefits from the information gained in completed trials. That means that a trial does not get access to the results of trials running at the same time, which could reduce the quality of the overall optimization. Each trial will use the same scale tier and machine types. Defaults to one.
max_trials This property is required. int
Optional. How many training trials should be attempted to optimize the specified hyperparameters. Defaults to one.
params This property is required. Sequence[GoogleCloudMlV1ParameterSpecResponse]
The set of parameters to tune.
resume_previous_job_id This property is required. str
Optional. The prior hyperparameter tuning job id that users hope to continue with. The job id will be used to find the corresponding Vizier study GUID and resume the study.
algorithm This property is required. String
Optional. The search algorithm specified for the hyperparameter tuning job. Uses the default AI Platform hyperparameter tuning algorithm if unspecified.
enableTrialEarlyStopping This property is required. Boolean
Optional. Indicates if the hyperparameter tuning job enables auto trial early stopping.
goal This property is required. String
The type of goal to use for tuning. Available types are MAXIMIZE and MINIMIZE. Defaults to MAXIMIZE.
hyperparameterMetricTag This property is required. String
Optional. The TensorFlow summary tag name to use for optimizing trials. For current versions of TensorFlow, this tag name should exactly match what is shown in TensorBoard, including all scopes. For versions of TensorFlow prior to 0.12, this should be only the tag passed to tf.Summary. By default, "training/hptuning/metric" will be used.
maxFailedTrials This property is required. Number
Optional. The number of failed trials that need to be seen before failing the hyperparameter tuning job. You can specify this field to override the default failing criteria for AI Platform hyperparameter tuning jobs. Defaults to zero, which means the service decides when a hyperparameter job should fail.
maxParallelTrials This property is required. Number
Optional. The number of training trials to run concurrently. You can reduce the time it takes to perform hyperparameter tuning by adding trials in parallel. However, each trial only benefits from the information gained in completed trials. That means that a trial does not get access to the results of trials running at the same time, which could reduce the quality of the overall optimization. Each trial will use the same scale tier and machine types. Defaults to one.
maxTrials This property is required. Number
Optional. How many training trials should be attempted to optimize the specified hyperparameters. Defaults to one.
params This property is required. List<Property Map>
The set of parameters to tune.
resumePreviousJobId This property is required. String
Optional. The prior hyperparameter tuning job id that users hope to continue with. The job id will be used to find the corresponding Vizier study GUID and resume the study.

GoogleCloudMlV1__ParameterSpecResponse

CategoricalValues This property is required. List<string>
Required if type is CATEGORICAL. The list of possible categories.
DiscreteValues This property is required. List<double>
Required if type is DISCRETE. A list of feasible points. The list should be in strictly increasing order. For instance, this parameter might have possible settings of 1.5, 2.5, and 4.0. This list should not contain more than 1,000 values.
MaxValue This property is required. double
Required if type is DOUBLE or INTEGER. This field should be unset if type is CATEGORICAL. This value should be an integer if type is INTEGER.
MinValue This property is required. double
Required if type is DOUBLE or INTEGER. This field should be unset if type is CATEGORICAL. This value should be an integer if type is INTEGER.
ParameterName This property is required. string
The parameter name must be unique amongst all ParameterConfigs in a HyperparameterSpec message. E.g., "learning_rate".
ScaleType This property is required. string
Optional. How the parameter should be scaled to the hypercube. Leave unset for categorical parameters. Some kind of scaling is strongly recommended for real or integral parameters (e.g., UNIT_LINEAR_SCALE).
Type This property is required. string
The type of the parameter.
CategoricalValues This property is required. []string
Required if type is CATEGORICAL. The list of possible categories.
DiscreteValues This property is required. []float64
Required if type is DISCRETE. A list of feasible points. The list should be in strictly increasing order. For instance, this parameter might have possible settings of 1.5, 2.5, and 4.0. This list should not contain more than 1,000 values.
MaxValue This property is required. float64
Required if type is DOUBLE or INTEGER. This field should be unset if type is CATEGORICAL. This value should be an integer if type is INTEGER.
MinValue This property is required. float64
Required if type is DOUBLE or INTEGER. This field should be unset if type is CATEGORICAL. This value should be an integer if type is INTEGER.
ParameterName This property is required. string
The parameter name must be unique amongst all ParameterConfigs in a HyperparameterSpec message. E.g., "learning_rate".
ScaleType This property is required. string
Optional. How the parameter should be scaled to the hypercube. Leave unset for categorical parameters. Some kind of scaling is strongly recommended for real or integral parameters (e.g., UNIT_LINEAR_SCALE).
Type This property is required. string
The type of the parameter.
categoricalValues This property is required. List<String>
Required if type is CATEGORICAL. The list of possible categories.
discreteValues This property is required. List<Double>
Required if type is DISCRETE. A list of feasible points. The list should be in strictly increasing order. For instance, this parameter might have possible settings of 1.5, 2.5, and 4.0. This list should not contain more than 1,000 values.
maxValue This property is required. Double
Required if type is DOUBLE or INTEGER. This field should be unset if type is CATEGORICAL. This value should be an integer if type is INTEGER.
minValue This property is required. Double
Required if type is DOUBLE or INTEGER. This field should be unset if type is CATEGORICAL. This value should be an integer if type is INTEGER.
parameterName This property is required. String
The parameter name must be unique amongst all ParameterConfigs in a HyperparameterSpec message. E.g., "learning_rate".
scaleType This property is required. String
Optional. How the parameter should be scaled to the hypercube. Leave unset for categorical parameters. Some kind of scaling is strongly recommended for real or integral parameters (e.g., UNIT_LINEAR_SCALE).
type This property is required. String
The type of the parameter.
categoricalValues This property is required. string[]
Required if type is CATEGORICAL. The list of possible categories.
discreteValues This property is required. number[]
Required if type is DISCRETE. A list of feasible points. The list should be in strictly increasing order. For instance, this parameter might have possible settings of 1.5, 2.5, and 4.0. This list should not contain more than 1,000 values.
maxValue This property is required. number
Required if type is DOUBLE or INTEGER. This field should be unset if type is CATEGORICAL. This value should be an integer if type is INTEGER.
minValue This property is required. number
Required if type is DOUBLE or INTEGER. This field should be unset if type is CATEGORICAL. This value should be an integer if type is INTEGER.
parameterName This property is required. string
The parameter name must be unique amongst all ParameterConfigs in a HyperparameterSpec message. E.g., "learning_rate".
scaleType This property is required. string
Optional. How the parameter should be scaled to the hypercube. Leave unset for categorical parameters. Some kind of scaling is strongly recommended for real or integral parameters (e.g., UNIT_LINEAR_SCALE).
type This property is required. string
The type of the parameter.
categorical_values This property is required. Sequence[str]
Required if type is CATEGORICAL. The list of possible categories.
discrete_values This property is required. Sequence[float]
Required if type is DISCRETE. A list of feasible points. The list should be in strictly increasing order. For instance, this parameter might have possible settings of 1.5, 2.5, and 4.0. This list should not contain more than 1,000 values.
max_value This property is required. float
Required if type is DOUBLE or INTEGER. This field should be unset if type is CATEGORICAL. This value should be an integer if type is INTEGER.
min_value This property is required. float
Required if type is DOUBLE or INTEGER. This field should be unset if type is CATEGORICAL. This value should be an integer if type is INTEGER.
parameter_name This property is required. str
The parameter name must be unique amongst all ParameterConfigs in a HyperparameterSpec message. E.g., "learning_rate".
scale_type This property is required. str
Optional. How the parameter should be scaled to the hypercube. Leave unset for categorical parameters. Some kind of scaling is strongly recommended for real or integral parameters (e.g., UNIT_LINEAR_SCALE).
type This property is required. str
The type of the parameter.
categoricalValues This property is required. List<String>
Required if type is CATEGORICAL. The list of possible categories.
discreteValues This property is required. List<Number>
Required if type is DISCRETE. A list of feasible points. The list should be in strictly increasing order. For instance, this parameter might have possible settings of 1.5, 2.5, and 4.0. This list should not contain more than 1,000 values.
maxValue This property is required. Number
Required if type is DOUBLE or INTEGER. This field should be unset if type is CATEGORICAL. This value should be an integer if type is INTEGER.
minValue This property is required. Number
Required if type is DOUBLE or INTEGER. This field should be unset if type is CATEGORICAL. This value should be an integer if type is INTEGER.
parameterName This property is required. String
The parameter name must be unique amongst all ParameterConfigs in a HyperparameterSpec message. E.g., "learning_rate".
scaleType This property is required. String
Optional. How the parameter should be scaled to the hypercube. Leave unset for categorical parameters. Some kind of scaling is strongly recommended for real or integral parameters (e.g., UNIT_LINEAR_SCALE).
type This property is required. String
The type of the parameter.

GoogleCloudMlV1__PredictionInputResponse

BatchSize This property is required. string
Optional. Number of records per batch, defaults to 64. The service will buffer batch_size records in memory before invoking one TensorFlow prediction call internally, so take the record size and available memory into consideration when setting this parameter.
DataFormat This property is required. string
The format of the input data files.
InputPaths This property is required. List<string>
The Cloud Storage location of the input data files. May contain wildcards.
MaxWorkerCount This property is required. string
Optional. The maximum number of workers to be used for parallel processing. Defaults to 10 if not specified.
ModelName This property is required. string
Use this field if you want to use the default version for the specified model. The string must use the following format: "projects/YOUR_PROJECT/models/YOUR_MODEL"
OutputDataFormat This property is required. string
Optional. Format of the output data files, defaults to JSON.
OutputPath This property is required. string
The output Google Cloud Storage location.
Region This property is required. string
The Google Compute Engine region to run the prediction job in. See the available regions for AI Platform services.
RuntimeVersion This property is required. string
Optional. The AI Platform runtime version to use for this batch prediction. If not set, AI Platform will pick the runtime version used during the CreateVersion request for this model version, or choose the latest stable version when model version information is not available such as when the model is specified by uri.
SignatureName This property is required. string
Optional. The name of the signature defined in the SavedModel to use for this job. Please refer to SavedModel for information about how to use signatures. Defaults to DEFAULT_SERVING_SIGNATURE_DEF_KEY, which is "serving_default".
Uri This property is required. string
Use this field if you want to specify a Google Cloud Storage path for the model to use.
VersionName This property is required. string
Use this field if you want to specify a version of the model to use. The string is formatted the same way as model_version, with the addition of the version information: "projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"
BatchSize This property is required. string
Optional. Number of records per batch, defaults to 64. The service will buffer batch_size records in memory before invoking one TensorFlow prediction call internally, so take the record size and available memory into consideration when setting this parameter.
DataFormat This property is required. string
The format of the input data files.
InputPaths This property is required. []string
The Cloud Storage location of the input data files. May contain wildcards.
MaxWorkerCount This property is required. string
Optional. The maximum number of workers to be used for parallel processing. Defaults to 10 if not specified.
ModelName This property is required. string
Use this field if you want to use the default version for the specified model. The string must use the following format: "projects/YOUR_PROJECT/models/YOUR_MODEL"
OutputDataFormat This property is required. string
Optional. Format of the output data files, defaults to JSON.
OutputPath This property is required. string
The output Google Cloud Storage location.
Region This property is required. string
The Google Compute Engine region to run the prediction job in. See the available regions for AI Platform services.
RuntimeVersion This property is required. string
Optional. The AI Platform runtime version to use for this batch prediction. If not set, AI Platform will pick the runtime version used during the CreateVersion request for this model version, or choose the latest stable version when model version information is not available such as when the model is specified by uri.
SignatureName This property is required. string
Optional. The name of the signature defined in the SavedModel to use for this job. Please refer to SavedModel for information about how to use signatures. Defaults to DEFAULT_SERVING_SIGNATURE_DEF_KEY, which is "serving_default".
Uri This property is required. string
Use this field if you want to specify a Google Cloud Storage path for the model to use.
VersionName This property is required. string
Use this field if you want to specify a version of the model to use. The string is formatted the same way as model_version, with the addition of the version information: "projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"
batchSize This property is required. String
Optional. Number of records per batch, defaults to 64. The service will buffer batch_size records in memory before invoking one TensorFlow prediction call internally, so take the record size and available memory into consideration when setting this parameter.
dataFormat This property is required. String
The format of the input data files.
inputPaths This property is required. List<String>
The Cloud Storage location of the input data files. May contain wildcards.
maxWorkerCount This property is required. String
Optional. The maximum number of workers to be used for parallel processing. Defaults to 10 if not specified.
modelName This property is required. String
Use this field if you want to use the default version for the specified model. The string must use the following format: "projects/YOUR_PROJECT/models/YOUR_MODEL"
outputDataFormat This property is required. String
Optional. Format of the output data files, defaults to JSON.
outputPath This property is required. String
The output Google Cloud Storage location.
region This property is required. String
The Google Compute Engine region to run the prediction job in. See the available regions for AI Platform services.
runtimeVersion This property is required. String
Optional. The AI Platform runtime version to use for this batch prediction. If not set, AI Platform will pick the runtime version used during the CreateVersion request for this model version, or choose the latest stable version when model version information is not available such as when the model is specified by uri.
signatureName This property is required. String
Optional. The name of the signature defined in the SavedModel to use for this job. Please refer to SavedModel for information about how to use signatures. Defaults to DEFAULT_SERVING_SIGNATURE_DEF_KEY, which is "serving_default".
uri This property is required. String
Use this field if you want to specify a Google Cloud Storage path for the model to use.
versionName This property is required. String
Use this field if you want to specify a version of the model to use. The string is formatted the same way as model_version, with the addition of the version information: "projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"
batchSize This property is required. string
Optional. Number of records per batch, defaults to 64. The service buffers batch_size records in memory before making one internal TensorFlow prediction call, so take the record size and available memory into consideration when setting this parameter.
dataFormat This property is required. string
The format of the input data files.
inputPaths This property is required. string[]
The Cloud Storage location of the input data files. May contain wildcards.
maxWorkerCount This property is required. string
Optional. The maximum number of workers to be used for parallel processing. Defaults to 10 if not specified.
modelName This property is required. string
Use this field if you want to use the default version for the specified model. The string must use the following format: "projects/YOUR_PROJECT/models/YOUR_MODEL"
outputDataFormat This property is required. string
Optional. Format of the output data files, defaults to JSON.
outputPath This property is required. string
The output Google Cloud Storage location.
region This property is required. string
The Google Compute Engine region to run the prediction job in. See the available regions for AI Platform services.
runtimeVersion This property is required. string
Optional. The AI Platform runtime version to use for this batch prediction. If not set, AI Platform will pick the runtime version used during the CreateVersion request for this model version, or choose the latest stable version when model version information is not available such as when the model is specified by uri.
signatureName This property is required. string
Optional. The name of the signature defined in the SavedModel to use for this job. Please refer to SavedModel for information about how to use signatures. Defaults to DEFAULT_SERVING_SIGNATURE_DEF_KEY, which is "serving_default".
uri This property is required. string
Use this field if you want to specify a Google Cloud Storage path for the model to use.
versionName This property is required. string
Use this field if you want to specify a version of the model to use. The string is formatted the same way as model_version, with the addition of the version information: "projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"
batch_size This property is required. str
Optional. Number of records per batch, defaults to 64. The service buffers batch_size records in memory before making one internal TensorFlow prediction call, so take the record size and available memory into consideration when setting this parameter.
data_format This property is required. str
The format of the input data files.
input_paths This property is required. Sequence[str]
The Cloud Storage location of the input data files. May contain wildcards.
max_worker_count This property is required. str
Optional. The maximum number of workers to be used for parallel processing. Defaults to 10 if not specified.
model_name This property is required. str
Use this field if you want to use the default version for the specified model. The string must use the following format: "projects/YOUR_PROJECT/models/YOUR_MODEL"
output_data_format This property is required. str
Optional. Format of the output data files, defaults to JSON.
output_path This property is required. str
The output Google Cloud Storage location.
region This property is required. str
The Google Compute Engine region to run the prediction job in. See the available regions for AI Platform services.
runtime_version This property is required. str
Optional. The AI Platform runtime version to use for this batch prediction. If not set, AI Platform will pick the runtime version used during the CreateVersion request for this model version, or choose the latest stable version when model version information is not available such as when the model is specified by uri.
signature_name This property is required. str
Optional. The name of the signature defined in the SavedModel to use for this job. Please refer to SavedModel for information about how to use signatures. Defaults to DEFAULT_SERVING_SIGNATURE_DEF_KEY, which is "serving_default".
uri This property is required. str
Use this field if you want to specify a Google Cloud Storage path for the model to use.
version_name This property is required. str
Use this field if you want to specify a version of the model to use. The string is formatted the same way as model_version, with the addition of the version information: "projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"
batchSize This property is required. String
Optional. Number of records per batch, defaults to 64. The service buffers batch_size records in memory before making one internal TensorFlow prediction call, so take the record size and available memory into consideration when setting this parameter.
dataFormat This property is required. String
The format of the input data files.
inputPaths This property is required. List<String>
The Cloud Storage location of the input data files. May contain wildcards.
maxWorkerCount This property is required. String
Optional. The maximum number of workers to be used for parallel processing. Defaults to 10 if not specified.
modelName This property is required. String
Use this field if you want to use the default version for the specified model. The string must use the following format: "projects/YOUR_PROJECT/models/YOUR_MODEL"
outputDataFormat This property is required. String
Optional. Format of the output data files, defaults to JSON.
outputPath This property is required. String
The output Google Cloud Storage location.
region This property is required. String
The Google Compute Engine region to run the prediction job in. See the available regions for AI Platform services.
runtimeVersion This property is required. String
Optional. The AI Platform runtime version to use for this batch prediction. If not set, AI Platform will pick the runtime version used during the CreateVersion request for this model version, or choose the latest stable version when model version information is not available such as when the model is specified by uri.
signatureName This property is required. String
Optional. The name of the signature defined in the SavedModel to use for this job. Please refer to SavedModel for information about how to use signatures. Defaults to DEFAULT_SERVING_SIGNATURE_DEF_KEY, which is "serving_default".
uri This property is required. String
Use this field if you want to specify a Google Cloud Storage path for the model to use.
versionName This property is required. String
Use this field if you want to specify a version of the model to use. The string is formatted the same way as model_version, with the addition of the version information: "projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"
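For orientation, here is a minimal TypeScript sketch of reading these batch-prediction input fields from a getJob lookup. The job id and project are placeholders, and it assumes the result exposes this object as predictionInput alongside the other result properties.

import * as google_native from "@pulumi/google-native";

// Placeholder identifiers; substitute a real batch prediction job and project.
const job = google_native.ml.v1.getJobOutput({
    jobId: "my-batch-prediction-job",
    project: "my-project",
});

// Surface a few of the prediction-input fields documented above.
export const inputPaths = job.predictionInput.apply(input => input?.inputPaths);
export const outputPath = job.predictionInput.apply(input => input?.outputPath);
export const runtimeVersion = job.predictionInput.apply(input => input?.runtimeVersion);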

GoogleCloudMlV1__PredictionOutputResponse

ErrorCount This property is required. string
The number of data instances which resulted in errors.
NodeHours This property is required. double
Node hours used by the batch prediction job.
OutputPath This property is required. string
The output Google Cloud Storage location provided at the job creation time.
PredictionCount This property is required. string
The number of generated predictions.
ErrorCount This property is required. string
The number of data instances which resulted in errors.
NodeHours This property is required. float64
Node hours used by the batch prediction job.
OutputPath This property is required. string
The output Google Cloud Storage location provided at the job creation time.
PredictionCount This property is required. string
The number of generated predictions.
errorCount This property is required. String
The number of data instances which resulted in errors.
nodeHours This property is required. Double
Node hours used by the batch prediction job.
outputPath This property is required. String
The output Google Cloud Storage location provided at the job creation time.
predictionCount This property is required. String
The number of generated predictions.
errorCount This property is required. string
The number of data instances which resulted in errors.
nodeHours This property is required. number
Node hours used by the batch prediction job.
outputPath This property is required. string
The output Google Cloud Storage location provided at the job creation time.
predictionCount This property is required. string
The number of generated predictions.
error_count This property is required. str
The number of data instances which resulted in errors.
node_hours This property is required. float
Node hours used by the batch prediction job.
output_path This property is required. str
The output Google Cloud Storage location provided at the job creation time.
prediction_count This property is required. str
The number of generated predictions.
errorCount This property is required. String
The number of data instances which resulted in errors.
nodeHours This property is required. Number
Node hours used by the batch prediction job.
outputPath This property is required. String
The output Google Cloud Storage location provided at the job creation time.
predictionCount This property is required. String
The number of generated predictions.
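errorCount and predictionCount are int64 values that the SDKs surface as strings, so parse them before doing arithmetic. A hedged TypeScript sketch (placeholder job and project ids; it also assumes the two counts are disjoint tallies):

import * as google_native from "@pulumi/google-native";

const job = google_native.ml.v1.getJobOutput({
    jobId: "my-batch-prediction-job", // placeholder
    project: "my-project",            // placeholder
});

// Parse the string counts before computing a failure rate.
export const errorRate = job.predictionOutput.apply(out => {
    const errors = Number(out?.errorCount ?? "0");
    const predictions = Number(out?.predictionCount ?? "0");
    const total = errors + predictions;
    return total > 0 ? errors / total : 0;
});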

GoogleCloudMlV1__ReplicaConfigResponse

AcceleratorConfig This property is required. Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__AcceleratorConfigResponse
Represents the type and number of accelerators used by the replica. Learn about restrictions on accelerator configurations for training.
ContainerArgs This property is required. List<string>
Arguments to the entrypoint command. The following rules apply for container_command and container_args: - If you do not supply command or args: The defaults defined in the Docker image are used. - If you supply a command but no args: The default Entrypoint and the default Cmd defined in the Docker image are ignored. Your command is run without any arguments. - If you supply only args: The default Entrypoint defined in the Docker image is run with the args that you supplied. - If you supply a command and args: The default Entrypoint and the default Cmd defined in the Docker image are ignored. Your command is run with your args. This field cannot be set if a custom container image is not provided. Note that this field and [TrainingInput.args] are mutually exclusive, i.e., both cannot be set at the same time.
ContainerCommand This property is required. List<string>
The command with which the replica's custom container is run. If provided, it overrides the default ENTRYPOINT of the Docker image. If not provided, the Docker image's ENTRYPOINT is used. This field cannot be set if a custom container image is not provided. Note that this field and [TrainingInput.args] are mutually exclusive, i.e., both cannot be set at the same time.
DiskConfig This property is required. Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__DiskConfigResponse
Represents the configuration of disk options.
ImageUri This property is required. string
The Docker image to run on the replica. This image must be in Container Registry. Learn more about configuring custom containers.
TpuTfVersion This property is required. string
The AI Platform runtime version that includes a TensorFlow version matching the one used in the custom container. This field is required if the replica is a TPU worker that uses a custom container. Otherwise, do not specify this field. This must be a runtime version that currently supports training with TPUs. Note that the version of TensorFlow included in a runtime version may differ from the numbering of the runtime version itself, because it may have a different patch version. In this field, you must specify the runtime version (TensorFlow minor version). For example, if your custom container runs TensorFlow 1.x.y, specify 1.x.
AcceleratorConfig This property is required. GoogleCloudMlV1__AcceleratorConfigResponse
Represents the type and number of accelerators used by the replica. Learn about restrictions on accelerator configurations for training.
ContainerArgs This property is required. []string
Arguments to the entrypoint command. The following rules apply for container_command and container_args: - If you do not supply command or args: The defaults defined in the Docker image are used. - If you supply a command but no args: The default Entrypoint and the default Cmd defined in the Docker image are ignored. Your command is run without any arguments. - If you supply only args: The default Entrypoint defined in the Docker image is run with the args that you supplied. - If you supply a command and args: The default Entrypoint and the default Cmd defined in the Docker image are ignored. Your command is run with your args. This field cannot be set if a custom container image is not provided. Note that this field and [TrainingInput.args] are mutually exclusive, i.e., both cannot be set at the same time.
ContainerCommand This property is required. []string
The command with which the replica's custom container is run. If provided, it overrides the default ENTRYPOINT of the Docker image. If not provided, the Docker image's ENTRYPOINT is used. This field cannot be set if a custom container image is not provided. Note that this field and [TrainingInput.args] are mutually exclusive, i.e., both cannot be set at the same time.
DiskConfig This property is required. GoogleCloudMlV1__DiskConfigResponse
Represents the configuration of disk options.
ImageUri This property is required. string
The Docker image to run on the replica. This image must be in Container Registry. Learn more about configuring custom containers.
TpuTfVersion This property is required. string
The AI Platform runtime version that includes a TensorFlow version matching the one used in the custom container. This field is required if the replica is a TPU worker that uses a custom container. Otherwise, do not specify this field. This must be a runtime version that currently supports training with TPUs. Note that the version of TensorFlow included in a runtime version may differ from the numbering of the runtime version itself, because it may have a different patch version. In this field, you must specify the runtime version (TensorFlow minor version). For example, if your custom container runs TensorFlow 1.x.y, specify 1.x.
acceleratorConfig This property is required. GoogleCloudMlV1__AcceleratorConfigResponse
Represents the type and number of accelerators used by the replica. Learn about restrictions on accelerator configurations for training.
containerArgs This property is required. List<String>
Arguments to the entrypoint command. The following rules apply for container_command and container_args: - If you do not supply command or args: The defaults defined in the Docker image are used. - If you supply a command but no args: The default Entrypoint and the default Cmd defined in the Docker image are ignored. Your command is run without any arguments. - If you supply only args: The default Entrypoint defined in the Docker image is run with the args that you supplied. - If you supply a command and args: The default Entrypoint and the default Cmd defined in the Docker image are ignored. Your command is run with your args. This field cannot be set if a custom container image is not provided. Note that this field and [TrainingInput.args] are mutually exclusive, i.e., both cannot be set at the same time.
containerCommand This property is required. List<String>
The command with which the replica's custom container is run. If provided, it overrides the default ENTRYPOINT of the Docker image. If not provided, the Docker image's ENTRYPOINT is used. This field cannot be set if a custom container image is not provided. Note that this field and [TrainingInput.args] are mutually exclusive, i.e., both cannot be set at the same time.
diskConfig This property is required. GoogleCloudMlV1__DiskConfigResponse
Represents the configuration of disk options.
imageUri This property is required. String
The Docker image to run on the replica. This image must be in Container Registry. Learn more about configuring custom containers.
tpuTfVersion This property is required. String
The AI Platform runtime version that includes a TensorFlow version matching the one used in the custom container. This field is required if the replica is a TPU worker that uses a custom container. Otherwise, do not specify this field. This must be a runtime version that currently supports training with TPUs. Note that the version of TensorFlow included in a runtime version may differ from the numbering of the runtime version itself, because it may have a different patch version. In this field, you must specify the runtime version (TensorFlow minor version). For example, if your custom container runs TensorFlow 1.x.y, specify 1.x.
acceleratorConfig This property is required. GoogleCloudMlV1__AcceleratorConfigResponse
Represents the type and number of accelerators used by the replica. Learn about restrictions on accelerator configurations for training.
containerArgs This property is required. string[]
Arguments to the entrypoint command. The following rules apply for container_command and container_args: - If you do not supply command or args: The defaults defined in the Docker image are used. - If you supply a command but no args: The default Entrypoint and the default Cmd defined in the Docker image are ignored. Your command is run without any arguments. - If you supply only args: The default Entrypoint defined in the Docker image is run with the args that you supplied. - If you supply a command and args: The default Entrypoint and the default Cmd defined in the Docker image are ignored. Your command is run with your args. This field cannot be set if a custom container image is not provided. Note that this field and [TrainingInput.args] are mutually exclusive, i.e., both cannot be set at the same time.
containerCommand This property is required. string[]
The command with which the replica's custom container is run. If provided, it overrides the default ENTRYPOINT of the Docker image. If not provided, the Docker image's ENTRYPOINT is used. This field cannot be set if a custom container image is not provided. Note that this field and [TrainingInput.args] are mutually exclusive, i.e., both cannot be set at the same time.
diskConfig This property is required. GoogleCloudMlV1__DiskConfigResponse
Represents the configuration of disk options.
imageUri This property is required. string
The Docker image to run on the replica. This image must be in Container Registry. Learn more about configuring custom containers.
tpuTfVersion This property is required. string
The AI Platform runtime version that includes a TensorFlow version matching the one used in the custom container. This field is required if the replica is a TPU worker that uses a custom container. Otherwise, do not specify this field. This must be a runtime version that currently supports training with TPUs. Note that the version of TensorFlow included in a runtime version may differ from the numbering of the runtime version itself, because it may have a different patch version. In this field, you must specify the runtime version (TensorFlow minor version). For example, if your custom container runs TensorFlow 1.x.y, specify 1.x.
accelerator_config This property is required. GoogleCloudMlV1AcceleratorConfigResponse
Represents the type and number of accelerators used by the replica. Learn about restrictions on accelerator configurations for training.
container_args This property is required. Sequence[str]
Arguments to the entrypoint command. The following rules apply for container_command and container_args: - If you do not supply command or args: The defaults defined in the Docker image are used. - If you supply a command but no args: The default Entrypoint and the default Cmd defined in the Docker image are ignored. Your command is run without any arguments. - If you supply only args: The default Entrypoint defined in the Docker image is run with the args that you supplied. - If you supply a command and args: The default Entrypoint and the default Cmd defined in the Docker image are ignored. Your command is run with your args. This field cannot be set if a custom container image is not provided. Note that this field and [TrainingInput.args] are mutually exclusive, i.e., both cannot be set at the same time.
container_command This property is required. Sequence[str]
The command with which the replica's custom container is run. If provided, it overrides the default ENTRYPOINT of the Docker image. If not provided, the Docker image's ENTRYPOINT is used. This field cannot be set if a custom container image is not provided. Note that this field and [TrainingInput.args] are mutually exclusive, i.e., both cannot be set at the same time.
disk_config This property is required. GoogleCloudMlV1DiskConfigResponse
Represents the configuration of disk options.
image_uri This property is required. str
The Docker image to run on the replica. This image must be in Container Registry. Learn more about configuring custom containers.
tpu_tf_version This property is required. str
The AI Platform runtime version that includes a TensorFlow version matching the one used in the custom container. This field is required if the replica is a TPU worker that uses a custom container. Otherwise, do not specify this field. This must be a runtime version that currently supports training with TPUs. Note that the version of TensorFlow included in a runtime version may differ from the numbering of the runtime version itself, because it may have a different patch version. In this field, you must specify the runtime version (TensorFlow minor version). For example, if your custom container runs TensorFlow 1.x.y, specify 1.x.
acceleratorConfig This property is required. Property Map
Represents the type and number of accelerators used by the replica. Learn about restrictions on accelerator configurations for training.
containerArgs This property is required. List<String>
Arguments to the entrypoint command. The following rules apply for container_command and container_args: - If you do not supply command or args: The defaults defined in the Docker image are used. - If you supply a command but no args: The default Entrypoint and the default Cmd defined in the Docker image are ignored. Your command is run without any arguments. - If you supply only args: The default Entrypoint defined in the Docker image is run with the args that you supplied. - If you supply a command and args: The default Entrypoint and the default Cmd defined in the Docker image are ignored. Your command is run with your args. This field cannot be set if a custom container image is not provided. Note that this field and [TrainingInput.args] are mutually exclusive, i.e., both cannot be set at the same time.
containerCommand This property is required. List<String>
The command with which the replica's custom container is run. If provided, it overrides the default ENTRYPOINT of the Docker image. If not provided, the Docker image's ENTRYPOINT is used. This field cannot be set if a custom container image is not provided. Note that this field and [TrainingInput.args] are mutually exclusive, i.e., both cannot be set at the same time.
diskConfig This property is required. Property Map
Represents the configuration of disk options.
imageUri This property is required. String
The Docker image to run on the replica. This image must be in Container Registry. Learn more about configuring custom containers.
tpuTfVersion This property is required. String
The AI Platform runtime version that includes a TensorFlow version matching the one used in the custom container. This field is required if the replica is a TPU worker that uses a custom container. Otherwise, do not specify this field. This must be a runtime version that currently supports training with TPUs. Note that the version of TensorFlow included in a runtime version may differ from the numbering of the runtime version itself, because it may have a different patch version. In this field, you must specify the runtime version (TensorFlow minor version). For example, if your custom container runs TensorFlow 1.x.y, specify 1.x.
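A replica configuration is not a top-level job property; it appears on the training input as masterConfig, workerConfig, parameterServerConfig, or evaluatorConfig (documented further below). A minimal TypeScript sketch of reading the master replica's container settings, assuming a training job exists under the placeholder ids:

import * as google_native from "@pulumi/google-native";

const job = google_native.ml.v1.getJobOutput({
    jobId: "my-training-job", // placeholder
    project: "my-project",    // placeholder
});

// masterConfig is a GoogleCloudMlV1__ReplicaConfigResponse as documented above.
export const masterImage = job.trainingInput.apply(t => t?.masterConfig?.imageUri);
export const masterAccelerator = job.trainingInput.apply(t => t?.masterConfig?.acceleratorConfig);
export const masterDisk = job.trainingInput.apply(t => t?.masterConfig?.diskConfig);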

GoogleCloudMlV1__SchedulingResponse

MaxRunningTime This property is required. string
Optional. The maximum job running time, expressed in seconds. The field can contain up to nine fractional digits, terminated by s. If not specified, this field defaults to 604800s (seven days). If the training job is still running after this duration, AI Platform Training cancels it. The duration is measured from when the job enters the RUNNING state; therefore it does not overlap with the duration limited by Scheduling.max_wait_time. For example, if you want to ensure your job runs for no more than 2 hours, set this field to 7200s (2 hours * 60 minutes / hour * 60 seconds / minute). If you submit your training job using the gcloud tool, you can specify this field in a config.yaml file. For example: yaml trainingInput: scheduling: maxRunningTime: 7200s
MaxWaitTime This property is required. string
Optional. The maximum job wait time, expressed in seconds. The field can contain up to nine fractional digits, terminated by s. If not specified, there is no limit to the wait time. The minimum for this field is 1800s (30 minutes). If the training job has not entered the RUNNING state after this duration, AI Platform Training cancels it. After the job begins running, it can no longer be cancelled due to the maximum wait time. Therefore the duration limited by this field does not overlap with the duration limited by Scheduling.max_running_time. For example, if the job temporarily stops running and retries due to a VM restart, this cannot lead to a maximum wait time cancellation. However, independently of this constraint, AI Platform Training might stop a job if there are too many retries due to exhausted resources in a region. The following example describes how you might use this field: To cancel your job if it doesn't start running within 1 hour, set this field to 3600s (1 hour * 60 minutes / hour * 60 seconds / minute). If the job is still in the QUEUED or PREPARING state after an hour of waiting, AI Platform Training cancels the job. If you submit your training job using the gcloud tool, you can specify this field in a config.yaml file. For example: yaml trainingInput: scheduling: maxWaitTime: 3600s
Priority This property is required. int
Optional. Job scheduling is based on this priority, which is in the range [0, 1000]. The larger the number, the higher the priority. Defaults to 0 if not set. If multiple jobs request the same type of accelerators, higher-priority jobs are scheduled before lower-priority ones.
MaxRunningTime This property is required. string
Optional. The maximum job running time, expressed in seconds. The field can contain up to nine fractional digits, terminated by s. If not specified, this field defaults to 604800s (seven days). If the training job is still running after this duration, AI Platform Training cancels it. The duration is measured from when the job enters the RUNNING state; therefore it does not overlap with the duration limited by Scheduling.max_wait_time. For example, if you want to ensure your job runs for no more than 2 hours, set this field to 7200s (2 hours * 60 minutes / hour * 60 seconds / minute). If you submit your training job using the gcloud tool, you can specify this field in a config.yaml file. For example: yaml trainingInput: scheduling: maxRunningTime: 7200s
MaxWaitTime This property is required. string
Optional. The maximum job wait time, expressed in seconds. The field can contain up to nine fractional digits, terminated by s. If not specified, there is no limit to the wait time. The minimum for this field is 1800s (30 minutes). If the training job has not entered the RUNNING state after this duration, AI Platform Training cancels it. After the job begins running, it can no longer be cancelled due to the maximum wait time. Therefore the duration limited by this field does not overlap with the duration limited by Scheduling.max_running_time. For example, if the job temporarily stops running and retries due to a VM restart, this cannot lead to a maximum wait time cancellation. However, independently of this constraint, AI Platform Training might stop a job if there are too many retries due to exhausted resources in a region. The following example describes how you might use this field: To cancel your job if it doesn't start running within 1 hour, set this field to 3600s (1 hour * 60 minutes / hour * 60 seconds / minute). If the job is still in the QUEUED or PREPARING state after an hour of waiting, AI Platform Training cancels the job. If you submit your training job using the gcloud tool, you can specify this field in a config.yaml file. For example: yaml trainingInput: scheduling: maxWaitTime: 3600s
Priority This property is required. int
Optional. Job scheduling is based on this priority, which is in the range [0, 1000]. The larger the number, the higher the priority. Defaults to 0 if not set. If multiple jobs request the same type of accelerators, higher-priority jobs are scheduled before lower-priority ones.
maxRunningTime This property is required. String
Optional. The maximum job running time, expressed in seconds. The field can contain up to nine fractional digits, terminated by s. If not specified, this field defaults to 604800s (seven days). If the training job is still running after this duration, AI Platform Training cancels it. The duration is measured from when the job enters the RUNNING state; therefore it does not overlap with the duration limited by Scheduling.max_wait_time. For example, if you want to ensure your job runs for no more than 2 hours, set this field to 7200s (2 hours * 60 minutes / hour * 60 seconds / minute). If you submit your training job using the gcloud tool, you can specify this field in a config.yaml file. For example: yaml trainingInput: scheduling: maxRunningTime: 7200s
maxWaitTime This property is required. String
Optional. The maximum job wait time, expressed in seconds. The field can contain up to nine fractional digits, terminated by s. If not specified, there is no limit to the wait time. The minimum for this field is 1800s (30 minutes). If the training job has not entered the RUNNING state after this duration, AI Platform Training cancels it. After the job begins running, it can no longer be cancelled due to the maximum wait time. Therefore the duration limited by this field does not overlap with the duration limited by Scheduling.max_running_time. For example, if the job temporarily stops running and retries due to a VM restart, this cannot lead to a maximum wait time cancellation. However, independently of this constraint, AI Platform Training might stop a job if there are too many retries due to exhausted resources in a region. The following example describes how you might use this field: To cancel your job if it doesn't start running within 1 hour, set this field to 3600s (1 hour * 60 minutes / hour * 60 seconds / minute). If the job is still in the QUEUED or PREPARING state after an hour of waiting, AI Platform Training cancels the job. If you submit your training job using the gcloud tool, you can specify this field in a config.yaml file. For example: yaml trainingInput: scheduling: maxWaitTime: 3600s
priority This property is required. Integer
Optional. Job scheduling is based on this priority, which is in the range [0, 1000]. The larger the number, the higher the priority. Defaults to 0 if not set. If multiple jobs request the same type of accelerators, higher-priority jobs are scheduled before lower-priority ones.
maxRunningTime This property is required. string
Optional. The maximum job running time, expressed in seconds. The field can contain up to nine fractional digits, terminated by s. If not specified, this field defaults to 604800s (seven days). If the training job is still running after this duration, AI Platform Training cancels it. The duration is measured from when the job enters the RUNNING state; therefore it does not overlap with the duration limited by Scheduling.max_wait_time. For example, if you want to ensure your job runs for no more than 2 hours, set this field to 7200s (2 hours * 60 minutes / hour * 60 seconds / minute). If you submit your training job using the gcloud tool, you can specify this field in a config.yaml file. For example: yaml trainingInput: scheduling: maxRunningTime: 7200s
maxWaitTime This property is required. string
Optional. The maximum job wait time, expressed in seconds. The field can contain up to nine fractional digits, terminated by s. If not specified, there is no limit to the wait time. The minimum for this field is 1800s (30 minutes). If the training job has not entered the RUNNING state after this duration, AI Platform Training cancels it. After the job begins running, it can no longer be cancelled due to the maximum wait time. Therefore the duration limited by this field does not overlap with the duration limited by Scheduling.max_running_time. For example, if the job temporarily stops running and retries due to a VM restart, this cannot lead to a maximum wait time cancellation. However, independently of this constraint, AI Platform Training might stop a job if there are too many retries due to exhausted resources in a region. The following example describes how you might use this field: To cancel your job if it doesn't start running within 1 hour, set this field to 3600s (1 hour * 60 minutes / hour * 60 seconds / minute). If the job is still in the QUEUED or PREPARING state after an hour of waiting, AI Platform Training cancels the job. If you submit your training job using the gcloud tool, you can specify this field in a config.yaml file. For example: yaml trainingInput: scheduling: maxWaitTime: 3600s
priority This property is required. number
Optional. Job scheduling is based on this priority, which is in the range [0, 1000]. The larger the number, the higher the priority. Defaults to 0 if not set. If multiple jobs request the same type of accelerators, higher-priority jobs are scheduled before lower-priority ones.
max_running_time This property is required. str
Optional. The maximum job running time, expressed in seconds. The field can contain up to nine fractional digits, terminated by s. If not specified, this field defaults to 604800s (seven days). If the training job is still running after this duration, AI Platform Training cancels it. The duration is measured from when the job enters the RUNNING state; therefore it does not overlap with the duration limited by Scheduling.max_wait_time. For example, if you want to ensure your job runs for no more than 2 hours, set this field to 7200s (2 hours * 60 minutes / hour * 60 seconds / minute). If you submit your training job using the gcloud tool, you can specify this field in a config.yaml file. For example: yaml trainingInput: scheduling: maxRunningTime: 7200s
max_wait_time This property is required. str
Optional. The maximum job wait time, expressed in seconds. The field can contain up to nine fractional digits, terminated by s. If not specified, there is no limit to the wait time. The minimum for this field is 1800s (30 minutes). If the training job has not entered the RUNNING state after this duration, AI Platform Training cancels it. After the job begins running, it can no longer be cancelled due to the maximum wait time. Therefore the duration limited by this field does not overlap with the duration limited by Scheduling.max_running_time. For example, if the job temporarily stops running and retries due to a VM restart, this cannot lead to a maximum wait time cancellation. However, independently of this constraint, AI Platform Training might stop a job if there are too many retries due to exhausted resources in a region. The following example describes how you might use this field: To cancel your job if it doesn't start running within 1 hour, set this field to 3600s (1 hour * 60 minutes / hour * 60 seconds / minute). If the job is still in the QUEUED or PREPARING state after an hour of waiting, AI Platform Training cancels the job. If you submit your training job using the gcloud tool, you can specify this field in a config.yaml file. For example: yaml trainingInput: scheduling: maxWaitTime: 3600s
priority This property is required. int
Optional. Job scheduling is based on this priority, which is in the range [0, 1000]. The larger the number, the higher the priority. Defaults to 0 if not set. If multiple jobs request the same type of accelerators, higher-priority jobs are scheduled before lower-priority ones.
maxRunningTime This property is required. String
Optional. The maximum job running time, expressed in seconds. The field can contain up to nine fractional digits, terminated by s. If not specified, this field defaults to 604800s (seven days). If the training job is still running after this duration, AI Platform Training cancels it. The duration is measured from when the job enters the RUNNING state; therefore it does not overlap with the duration limited by Scheduling.max_wait_time. For example, if you want to ensure your job runs for no more than 2 hours, set this field to 7200s (2 hours * 60 minutes / hour * 60 seconds / minute). If you submit your training job using the gcloud tool, you can specify this field in a config.yaml file. For example: yaml trainingInput: scheduling: maxRunningTime: 7200s
maxWaitTime This property is required. String
Optional. The maximum job wait time, expressed in seconds. The field can contain up to nine fractional digits, terminated by s. If not specified, there is no limit to the wait time. The minimum for this field is 1800s (30 minutes). If the training job has not entered the RUNNING state after this duration, AI Platform Training cancels it. After the job begins running, it can no longer be cancelled due to the maximum wait time. Therefore the duration limited by this field does not overlap with the duration limited by Scheduling.max_running_time. For example, if the job temporarily stops running and retries due to a VM restart, this cannot lead to a maximum wait time cancellation. However, independently of this constraint, AI Platform Training might stop a job if there are too many retries due to exhausted resources in a region. The following example describes how you might use this field: To cancel your job if it doesn't start running within 1 hour, set this field to 3600s (1 hour * 60 minutes / hour * 60 seconds / minute). If the job is still in the QUEUED or PREPARING state after an hour of waiting, AI Platform Training cancels the job. If you submit your training job using the gcloud tool, you can specify this field in a config.yaml file. For example: yaml trainingInput: scheduling: maxWaitTime: 3600s
priority This property is required. Number
Optional. Job scheduling is based on this priority, which is in the range [0, 1000]. The larger the number, the higher the priority. Defaults to 0 if not set. If multiple jobs request the same type of accelerators, higher-priority jobs are scheduled before lower-priority ones.
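The scheduling block is nested under the job's training input. maxRunningTime and maxWaitTime are duration strings such as "7200s", so converting them to numbers means stripping the trailing s. A minimal TypeScript sketch under the same placeholder ids as above:

import * as google_native from "@pulumi/google-native";

const job = google_native.ml.v1.getJobOutput({
    jobId: "my-training-job", // placeholder
    project: "my-project",    // placeholder
});

// Durations look like "7200s"; strip the trailing "s" to get seconds.
export const maxRunningHours = job.trainingInput.apply(t => {
    const raw = t?.scheduling?.maxRunningTime ?? "0s";
    return Number(raw.replace(/s$/, "")) / 3600;
});
export const schedulingPriority = job.trainingInput.apply(t => t?.scheduling?.priority);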

GoogleCloudMlV1__TrainingInputResponse

Args This property is required. List<string>
Optional. Command-line arguments passed to the training application when it starts. If your job uses a custom container, then the arguments are passed to the container's ENTRYPOINT command.
EnableWebAccess This property is required. bool
Optional. Whether you want AI Platform Training to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by TrainingOutput.web_access_uris or HyperparameterOutput.web_access_uris (within TrainingOutput.trials).
EncryptionConfig This property is required. Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__EncryptionConfigResponse
Optional. Options for using customer-managed encryption keys (CMEK) to protect resources created by a training job, instead of using Google's default encryption. If this is set, then all resources created by the training job will be encrypted with the customer-managed encryption key that you specify. Learn how and when to use CMEK with AI Platform Training.
EvaluatorConfig This property is required. Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__ReplicaConfigResponse
Optional. The configuration for evaluators. You should only set evaluatorConfig.acceleratorConfig if evaluatorType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set evaluatorConfig.imageUri only if you build a custom image for your evaluator. If evaluatorConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
EvaluatorCount This property is required. string
Optional. The number of evaluator replicas to use for the training job. Each replica in the cluster will be of the type specified in evaluator_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set evaluator_type. The default value is zero.
EvaluatorType This property is required. string
Optional. Specifies the type of virtual machine to use for your training job's evaluator nodes. The supported values are the same as those described in the entry for masterType. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when scaleTier is set to CUSTOM and evaluatorCount is greater than zero.
Hyperparameters This property is required. Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__HyperparameterSpecResponse
Optional. The set of Hyperparameters to tune.
JobDir This property is required. string
Optional. A Google Cloud Storage path in which to store training outputs and other data needed for training. This path is passed to your TensorFlow program as the '--job-dir' command-line argument. The benefit of specifying this field is that Cloud ML validates the path for use in training.
MasterConfig This property is required. Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__ReplicaConfigResponse
Optional. The configuration for your master worker. You should only set masterConfig.acceleratorConfig if masterType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set masterConfig.imageUri only if you build a custom image. Only one of masterConfig.imageUri and runtimeVersion should be set. Learn more about configuring custom containers.
MasterType This property is required. string
Optional. Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. See the list of compatible Compute Engine machine types. Alternatively, you can use certain legacy machine types in this field. See the list of legacy machine types. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPUs.
Network This property is required. string
Optional. The full name of the Compute Engine network to which the Job is peered. For example, projects/12345/global/networks/myVPC. The format of this field is projects/{project}/global/networks/{network}, where {project} is a project number (like 12345) and {network} is the network name. Private services access must already be configured for the network. If left unspecified, the Job is not peered with any network. Learn about using VPC Network Peering.
PackageUris This property is required. List<string>
The Google Cloud Storage location of the packages with the training program and any additional dependencies. The maximum number of package URIs is 100.
ParameterServerConfig This property is required. Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__ReplicaConfigResponse
Optional. The configuration for parameter servers. You should only set parameterServerConfig.acceleratorConfig if parameterServerType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set parameterServerConfig.imageUri only if you build a custom image for your parameter server. If parameterServerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
ParameterServerCount This property is required. string
Optional. The number of parameter server replicas to use for the training job. Each replica in the cluster will be of the type specified in parameter_server_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set parameter_server_type. The default value is zero.
ParameterServerType This property is required. string
Optional. Specifies the type of virtual machine to use for your training job's parameter server. The supported values are the same as those described in the entry for master_type. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when scaleTier is set to CUSTOM and parameter_server_count is greater than zero.
PythonModule This property is required. string
The Python module name to run after installing the packages.
PythonVersion This property is required. string
Optional. The version of Python used in training. You must either specify this field or specify masterConfig.imageUri. The following Python versions are available: * Python '3.7' is available when runtime_version is set to '1.15' or later. * Python '3.5' is available when runtime_version is set to a version from '1.4' to '1.14'. * Python '2.7' is available when runtime_version is set to '1.15' or earlier. Read more about the Python versions available for each runtime version.
Region This property is required. string
The region to run the training job in. See the available regions for AI Platform Training.
RuntimeVersion This property is required. string
Optional. The AI Platform runtime version to use for training. You must either specify this field or specify masterConfig.imageUri. For more information, see the runtime version list and learn how to manage runtime versions.
ScaleTier This property is required. string
Specifies the machine types, the number of replicas for workers and parameter servers.
Scheduling This property is required. Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__SchedulingResponse
Optional. Scheduling options for a training job.
ServiceAccount This property is required. string
Optional. The email address of a service account to use when running the training application. You must have the iam.serviceAccounts.actAs permission for the specified service account. In addition, the AI Platform Training Google-managed service account must have the roles/iam.serviceAccountAdmin role for the specified service account. Learn more about configuring a service account. If not specified, the AI Platform Training Google-managed service account is used by default.
UseChiefInTfConfig This property is required. bool
Optional. Use chief instead of master in the TF_CONFIG environment variable when training with a custom container. Defaults to false. Learn more about this field. This field has no effect for training jobs that don't use a custom container.
WorkerConfig This property is required. Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__ReplicaConfigResponse
Optional. The configuration for workers. You should only set workerConfig.acceleratorConfig if workerType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set workerConfig.imageUri only if you build a custom image for your worker. If workerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
WorkerCount This property is required. string
Optional. The number of worker replicas to use for the training job. Each replica in the cluster will be of the type specified in worker_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set worker_type. The default value is zero.
WorkerType This property is required. string
Optional. Specifies the type of virtual machine to use for your training job's worker nodes. The supported values are the same as those described in the entry for masterType. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. If you use cloud_tpu for this value, see special instructions for configuring a custom TPU machine. This value must be present when scaleTier is set to CUSTOM and workerCount is greater than zero.
Args This property is required. []string
Optional. Command-line arguments passed to the training application when it starts. If your job uses a custom container, then the arguments are passed to the container's ENTRYPOINT command.
EnableWebAccess This property is required. bool
Optional. Whether you want AI Platform Training to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by TrainingOutput.web_access_uris or HyperparameterOutput.web_access_uris (within TrainingOutput.trials).
EncryptionConfig This property is required. GoogleCloudMlV1__EncryptionConfigResponse
Optional. Options for using customer-managed encryption keys (CMEK) to protect resources created by a training job, instead of using Google's default encryption. If this is set, then all resources created by the training job will be encrypted with the customer-managed encryption key that you specify. Learn how and when to use CMEK with AI Platform Training.
EvaluatorConfig This property is required. GoogleCloudMlV1__ReplicaConfigResponse
Optional. The configuration for evaluators. You should only set evaluatorConfig.acceleratorConfig if evaluatorType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set evaluatorConfig.imageUri only if you build a custom image for your evaluator. If evaluatorConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
EvaluatorCount This property is required. string
Optional. The number of evaluator replicas to use for the training job. Each replica in the cluster will be of the type specified in evaluator_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set evaluator_type. The default value is zero.
EvaluatorType This property is required. string
Optional. Specifies the type of virtual machine to use for your training job's evaluator nodes. The supported values are the same as those described in the entry for masterType. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when scaleTier is set to CUSTOM and evaluatorCount is greater than zero.
Hyperparameters This property is required. GoogleCloudMlV1__HyperparameterSpecResponse
Optional. The set of Hyperparameters to tune.
JobDir This property is required. string
Optional. A Google Cloud Storage path in which to store training outputs and other data needed for training. This path is passed to your TensorFlow program as the '--job-dir' command-line argument. The benefit of specifying this field is that Cloud ML validates the path for use in training.
MasterConfig This property is required. GoogleCloudMlV1__ReplicaConfigResponse
Optional. The configuration for your master worker. You should only set masterConfig.acceleratorConfig if masterType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set masterConfig.imageUri only if you build a custom image. Only one of masterConfig.imageUri and runtimeVersion should be set. Learn more about configuring custom containers.
MasterType This property is required. string
Optional. Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. See the list of compatible Compute Engine machine types. Alternatively, you can use certain legacy machine types in this field. See the list of legacy machine types. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPUs.
Network This property is required. string
Optional. The full name of the Compute Engine network to which the Job is peered. For example, projects/12345/global/networks/myVPC. The format of this field is projects/{project}/global/networks/{network}, where {project} is a project number (like 12345) and {network} is the network name. Private services access must already be configured for the network. If left unspecified, the Job is not peered with any network. Learn about using VPC Network Peering.
PackageUris This property is required. []string
The Google Cloud Storage location of the packages with the training program and any additional dependencies. The maximum number of package URIs is 100.
ParameterServerConfig This property is required. GoogleCloudMlV1__ReplicaConfigResponse
Optional. The configuration for parameter servers. You should only set parameterServerConfig.acceleratorConfig if parameterServerType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set parameterServerConfig.imageUri only if you build a custom image for your parameter server. If parameterServerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
ParameterServerCount This property is required. string
Optional. The number of parameter server replicas to use for the training job. Each replica in the cluster will be of the type specified in parameter_server_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set parameter_server_type. The default value is zero.
ParameterServerType This property is required. string
Optional. Specifies the type of virtual machine to use for your training job's parameter server. The supported values are the same as those described in the entry for master_type. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when scaleTier is set to CUSTOM and parameter_server_count is greater than zero.
PythonModule This property is required. string
The Python module name to run after installing the packages.
PythonVersion This property is required. string
Optional. The version of Python used in training. You must either specify this field or specify masterConfig.imageUri. The following Python versions are available: * Python '3.7' is available when runtime_version is set to '1.15' or later. * Python '3.5' is available when runtime_version is set to a version from '1.4' to '1.14'. * Python '2.7' is available when runtime_version is set to '1.15' or earlier. Read more about the Python versions available for each runtime version.
Region This property is required. string
The region to run the training job in. See the available regions for AI Platform Training.
RuntimeVersion This property is required. string
Optional. The AI Platform runtime version to use for training. You must either specify this field or specify masterConfig.imageUri. For more information, see the runtime version list and learn how to manage runtime versions.
ScaleTier This property is required. string
Specifies the machine types and the number of replicas to use for workers and parameter servers.
Scheduling This property is required. GoogleCloudMlV1__SchedulingResponse
Optional. Scheduling options for a training job.
ServiceAccount This property is required. string
Optional. The email address of a service account to use when running the training application. You must have the iam.serviceAccounts.actAs permission for the specified service account. In addition, the AI Platform Training Google-managed service account must have the roles/iam.serviceAccountAdmin role for the specified service account. Learn more about configuring a service account. If not specified, the AI Platform Training Google-managed service account is used by default.
UseChiefInTfConfig This property is required. bool
Optional. Use chief instead of master in the TF_CONFIG environment variable when training with a custom container. Defaults to false. Learn more about this field. This field has no effect for training jobs that don't use a custom container.
WorkerConfig This property is required. GoogleCloudMlV1__ReplicaConfigResponse
Optional. The configuration for workers. You should only set workerConfig.acceleratorConfig if workerType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set workerConfig.imageUri only if you build a custom image for your worker. If workerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
WorkerCount This property is required. string
Optional. The number of worker replicas to use for the training job. Each replica in the cluster will be of the type specified in worker_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set worker_type. The default value is zero.
WorkerType This property is required. string
Optional. Specifies the type of virtual machine to use for your training job's worker nodes. The supported values are the same as those described in the entry for masterType. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. If you use cloud_tpu for this value, see special instructions for configuring a custom TPU machine. This value must be present when scaleTier is set to CUSTOM and workerCount is greater than zero.
args This property is required. List<String>
Optional. Command-line arguments passed to the training application when it starts. If your job uses a custom container, then the arguments are passed to the container's ENTRYPOINT command.
enableWebAccess This property is required. Boolean
Optional. Whether you want AI Platform Training to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by TrainingOutput.web_access_uris or HyperparameterOutput.web_access_uris (within TrainingOutput.trials).
encryptionConfig This property is required. GoogleCloudMlV1__EncryptionConfigResponse
Optional. Options for using customer-managed encryption keys (CMEK) to protect resources created by a training job, instead of using Google's default encryption. If this is set, then all resources created by the training job will be encrypted with the customer-managed encryption key that you specify. Learn how and when to use CMEK with AI Platform Training.
evaluatorConfig This property is required. GoogleCloudMlV1__ReplicaConfigResponse
Optional. The configuration for evaluators. You should only set evaluatorConfig.acceleratorConfig if evaluatorType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set evaluatorConfig.imageUri only if you build a custom image for your evaluator. If evaluatorConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
evaluatorCount This property is required. String
Optional. The number of evaluator replicas to use for the training job. Each replica in the cluster will be of the type specified in evaluator_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set evaluator_type. The default value is zero.
evaluatorType This property is required. String
Optional. Specifies the type of virtual machine to use for your training job's evaluator nodes. The supported values are the same as those described in the entry for masterType. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when scaleTier is set to CUSTOM and evaluatorCount is greater than zero.
hyperparameters This property is required. GoogleCloudMlV1__HyperparameterSpecResponse
Optional. The set of Hyperparameters to tune.
jobDir This property is required. String
Optional. A Google Cloud Storage path in which to store training outputs and other data needed for training. This path is passed to your TensorFlow program as the '--job-dir' command-line argument. The benefit of specifying this field is that Cloud ML validates the path for use in training.
masterConfig This property is required. GoogleCloudMlV1__ReplicaConfigResponse
Optional. The configuration for your master worker. You should only set masterConfig.acceleratorConfig if masterType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set masterConfig.imageUri only if you build a custom image. Only one of masterConfig.imageUri and runtimeVersion should be set. Learn more about configuring custom containers.
masterType This property is required. String
Optional. Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. See the list of compatible Compute Engine machine types. Alternatively, you can use certain legacy machine types in this field. See the list of legacy machine types. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPUs.
network This property is required. String
Optional. The full name of the Compute Engine network to which the Job is peered. For example, projects/12345/global/networks/myVPC. The format of this field is projects/{project}/global/networks/{network}, where {project} is a project number (like 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the Job is not peered with any network. Learn about using VPC Network Peering.
packageUris This property is required. List<String>
The Google Cloud Storage location of the packages with the training program and any additional dependencies. The maximum number of package URIs is 100.
parameterServerConfig This property is required. GoogleCloudMlV1__ReplicaConfigResponse
Optional. The configuration for parameter servers. You should only set parameterServerConfig.acceleratorConfig if parameterServerType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set parameterServerConfig.imageUri only if you build a custom image for your parameter server. If parameterServerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
parameterServerCount This property is required. String
Optional. The number of parameter server replicas to use for the training job. Each replica in the cluster will be of the type specified in parameter_server_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set parameter_server_type. The default value is zero.
parameterServerType This property is required. String
Optional. Specifies the type of virtual machine to use for your training job's parameter server. The supported values are the same as those described in the entry for master_type. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when scaleTier is set to CUSTOM and parameter_server_count is greater than zero.
pythonModule This property is required. String
The Python module name to run after installing the packages.
pythonVersion This property is required. String
Optional. The version of Python used in training. You must either specify this field or specify masterConfig.imageUri. The following Python versions are available: * Python '3.7' is available when runtime_version is set to '1.15' or later. * Python '3.5' is available when runtime_version is set to a version from '1.4' to '1.14'. * Python '2.7' is available when runtime_version is set to '1.15' or earlier. Read more about the Python versions available for each runtime version.
region This property is required. String
The region to run the training job in. See the available regions for AI Platform Training.
runtimeVersion This property is required. String
Optional. The AI Platform runtime version to use for training. You must either specify this field or specify masterConfig.imageUri. For more information, see the runtime version list and learn how to manage runtime versions.
scaleTier This property is required. String
Specifies the machine types and the number of replicas to use for workers and parameter servers.
scheduling This property is required. GoogleCloudMlV1__SchedulingResponse
Optional. Scheduling options for a training job.
serviceAccount This property is required. String
Optional. The email address of a service account to use when running the training application. You must have the iam.serviceAccounts.actAs permission for the specified service account. In addition, the AI Platform Training Google-managed service account must have the roles/iam.serviceAccountAdmin role for the specified service account. Learn more about configuring a service account. If not specified, the AI Platform Training Google-managed service account is used by default.
useChiefInTfConfig This property is required. Boolean
Optional. Use chief instead of master in the TF_CONFIG environment variable when training with a custom container. Defaults to false. Learn more about this field. This field has no effect for training jobs that don't use a custom container.
workerConfig This property is required. GoogleCloudMlV1__ReplicaConfigResponse
Optional. The configuration for workers. You should only set workerConfig.acceleratorConfig if workerType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set workerConfig.imageUri only if you build a custom image for your worker. If workerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
workerCount This property is required. String
Optional. The number of worker replicas to use for the training job. Each replica in the cluster will be of the type specified in worker_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set worker_type. The default value is zero.
workerType This property is required. String
Optional. Specifies the type of virtual machine to use for your training job's worker nodes. The supported values are the same as those described in the entry for masterType. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. If you use cloud_tpu for this value, see special instructions for configuring a custom TPU machine. This value must be present when scaleTier is set to CUSTOM and workerCount is greater than zero.
args This property is required. string[]
Optional. Command-line arguments passed to the training application when it starts. If your job uses a custom container, then the arguments are passed to the container's ENTRYPOINT command.
enableWebAccess This property is required. boolean
Optional. Whether you want AI Platform Training to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by TrainingOutput.web_access_uris or HyperparameterOutput.web_access_uris (within TrainingOutput.trials).
encryptionConfig This property is required. GoogleCloudMlV1__EncryptionConfigResponse
Optional. Options for using customer-managed encryption keys (CMEK) to protect resources created by a training job, instead of using Google's default encryption. If this is set, then all resources created by the training job will be encrypted with the customer-managed encryption key that you specify. Learn how and when to use CMEK with AI Platform Training.
evaluatorConfig This property is required. GoogleCloudMlV1__ReplicaConfigResponse
Optional. The configuration for evaluators. You should only set evaluatorConfig.acceleratorConfig if evaluatorType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set evaluatorConfig.imageUri only if you build a custom image for your evaluator. If evaluatorConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
evaluatorCount This property is required. string
Optional. The number of evaluator replicas to use for the training job. Each replica in the cluster will be of the type specified in evaluator_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set evaluator_type. The default value is zero.
evaluatorType This property is required. string
Optional. Specifies the type of virtual machine to use for your training job's evaluator nodes. The supported values are the same as those described in the entry for masterType. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when scaleTier is set to CUSTOM and evaluatorCount is greater than zero.
hyperparameters This property is required. GoogleCloudMlV1__HyperparameterSpecResponse
Optional. The set of Hyperparameters to tune.
jobDir This property is required. string
Optional. A Google Cloud Storage path in which to store training outputs and other data needed for training. This path is passed to your TensorFlow program as the '--job-dir' command-line argument. The benefit of specifying this field is that Cloud ML validates the path for use in training.
masterConfig This property is required. GoogleCloudMlV1__ReplicaConfigResponse
Optional. The configuration for your master worker. You should only set masterConfig.acceleratorConfig if masterType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set masterConfig.imageUri only if you build a custom image. Only one of masterConfig.imageUri and runtimeVersion should be set. Learn more about configuring custom containers.
masterType This property is required. string
Optional. Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. See the list of compatible Compute Engine machine types. Alternatively, you can use certain legacy machine types in this field. See the list of legacy machine types. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPUs.
network This property is required. string
Optional. The full name of the Compute Engine network to which the Job is peered. For example, projects/12345/global/networks/myVPC. The format of this field is projects/{project}/global/networks/{network}, where {project} is a project number (like 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the Job is not peered with any network. Learn about using VPC Network Peering.
packageUris This property is required. string[]
The Google Cloud Storage location of the packages with the training program and any additional dependencies. The maximum number of package URIs is 100.
parameterServerConfig This property is required. GoogleCloudMlV1__ReplicaConfigResponse
Optional. The configuration for parameter servers. You should only set parameterServerConfig.acceleratorConfig if parameterServerType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set parameterServerConfig.imageUri only if you build a custom image for your parameter server. If parameterServerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
parameterServerCount This property is required. string
Optional. The number of parameter server replicas to use for the training job. Each replica in the cluster will be of the type specified in parameter_server_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set parameter_server_type. The default value is zero.
parameterServerType This property is required. string
Optional. Specifies the type of virtual machine to use for your training job's parameter server. The supported values are the same as those described in the entry for master_type. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when scaleTier is set to CUSTOM and parameter_server_count is greater than zero.
pythonModule This property is required. string
The Python module name to run after installing the packages.
pythonVersion This property is required. string
Optional. The version of Python used in training. You must either specify this field or specify masterConfig.imageUri. The following Python versions are available: * Python '3.7' is available when runtime_version is set to '1.15' or later. * Python '3.5' is available when runtime_version is set to a version from '1.4' to '1.14'. * Python '2.7' is available when runtime_version is set to '1.15' or earlier. Read more about the Python versions available for each runtime version.
region This property is required. string
The region to run the training job in. See the available regions for AI Platform Training.
runtimeVersion This property is required. string
Optional. The AI Platform runtime version to use for training. You must either specify this field or specify masterConfig.imageUri. For more information, see the runtime version list and learn how to manage runtime versions.
scaleTier This property is required. string
Specifies the machine types and the number of replicas to use for workers and parameter servers.
scheduling This property is required. GoogleCloudMlV1__SchedulingResponse
Optional. Scheduling options for a training job.
serviceAccount This property is required. string
Optional. The email address of a service account to use when running the training application. You must have the iam.serviceAccounts.actAs permission for the specified service account. In addition, the AI Platform Training Google-managed service account must have the roles/iam.serviceAccountAdmin role for the specified service account. Learn more about configuring a service account. If not specified, the AI Platform Training Google-managed service account is used by default.
useChiefInTfConfig This property is required. boolean
Optional. Use chief instead of master in the TF_CONFIG environment variable when training with a custom container. Defaults to false. Learn more about this field. This field has no effect for training jobs that don't use a custom container.
workerConfig This property is required. GoogleCloudMlV1__ReplicaConfigResponse
Optional. The configuration for workers. You should only set workerConfig.acceleratorConfig if workerType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set workerConfig.imageUri only if you build a custom image for your worker. If workerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
workerCount This property is required. string
Optional. The number of worker replicas to use for the training job. Each replica in the cluster will be of the type specified in worker_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set worker_type. The default value is zero.
workerType This property is required. string
Optional. Specifies the type of virtual machine to use for your training job's worker nodes. The supported values are the same as those described in the entry for masterType. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. If you use cloud_tpu for this value, see special instructions for configuring a custom TPU machine. This value must be present when scaleTier is set to CUSTOM and workerCount is greater than zero.
args This property is required. Sequence[str]
Optional. Command-line arguments passed to the training application when it starts. If your job uses a custom container, then the arguments are passed to the container's ENTRYPOINT command.
enable_web_access This property is required. bool
Optional. Whether you want AI Platform Training to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by TrainingOutput.web_access_uris or HyperparameterOutput.web_access_uris (within TrainingOutput.trials).
encryption_config This property is required. GoogleCloudMlV1EncryptionConfigResponse
Optional. Options for using customer-managed encryption keys (CMEK) to protect resources created by a training job, instead of using Google's default encryption. If this is set, then all resources created by the training job will be encrypted with the customer-managed encryption key that you specify. Learn how and when to use CMEK with AI Platform Training.
evaluator_config This property is required. GoogleCloudMlV1ReplicaConfigResponse
Optional. The configuration for evaluators. You should only set evaluatorConfig.acceleratorConfig if evaluatorType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set evaluatorConfig.imageUri only if you build a custom image for your evaluator. If evaluatorConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
evaluator_count This property is required. str
Optional. The number of evaluator replicas to use for the training job. Each replica in the cluster will be of the type specified in evaluator_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set evaluator_type. The default value is zero.
evaluator_type This property is required. str
Optional. Specifies the type of virtual machine to use for your training job's evaluator nodes. The supported values are the same as those described in the entry for masterType. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when scaleTier is set to CUSTOM and evaluatorCount is greater than zero.
hyperparameters This property is required. GoogleCloudMlV1HyperparameterSpecResponse
Optional. The set of Hyperparameters to tune.
job_dir This property is required. str
Optional. A Google Cloud Storage path in which to store training outputs and other data needed for training. This path is passed to your TensorFlow program as the '--job-dir' command-line argument. The benefit of specifying this field is that Cloud ML validates the path for use in training.
master_config This property is required. GoogleCloudMlV1ReplicaConfigResponse
Optional. The configuration for your master worker. You should only set masterConfig.acceleratorConfig if masterType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set masterConfig.imageUri only if you build a custom image. Only one of masterConfig.imageUri and runtimeVersion should be set. Learn more about configuring custom containers.
master_type This property is required. str
Optional. Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. See the list of compatible Compute Engine machine types. Alternatively, you can use certain legacy machine types in this field. See the list of legacy machine types. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPUs.
network This property is required. str
Optional. The full name of the Compute Engine network to which the Job is peered. For example, projects/12345/global/networks/myVPC. The format of this field is projects/{project}/global/networks/{network}, where {project} is a project number (like 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the Job is not peered with any network. Learn about using VPC Network Peering.
package_uris This property is required. Sequence[str]
The Google Cloud Storage location of the packages with the training program and any additional dependencies. The maximum number of package URIs is 100.
parameter_server_config This property is required. GoogleCloudMlV1ReplicaConfigResponse
Optional. The configuration for parameter servers. You should only set parameterServerConfig.acceleratorConfig if parameterServerType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set parameterServerConfig.imageUri only if you build a custom image for your parameter server. If parameterServerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
parameter_server_count This property is required. str
Optional. The number of parameter server replicas to use for the training job. Each replica in the cluster will be of the type specified in parameter_server_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set parameter_server_type. The default value is zero.
parameter_server_type This property is required. str
Optional. Specifies the type of virtual machine to use for your training job's parameter server. The supported values are the same as those described in the entry for master_type. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when scaleTier is set to CUSTOM and parameter_server_count is greater than zero.
python_module This property is required. str
The Python module name to run after installing the packages.
python_version This property is required. str
Optional. The version of Python used in training. You must either specify this field or specify masterConfig.imageUri. The following Python versions are available: * Python '3.7' is available when runtime_version is set to '1.15' or later. * Python '3.5' is available when runtime_version is set to a version from '1.4' to '1.14'. * Python '2.7' is available when runtime_version is set to '1.15' or earlier. Read more about the Python versions available for each runtime version.
region This property is required. str
The region to run the training job in. See the available regions for AI Platform Training.
runtime_version This property is required. str
Optional. The AI Platform runtime version to use for training. You must either specify this field or specify masterConfig.imageUri. For more information, see the runtime version list and learn how to manage runtime versions.
scale_tier This property is required. str
Specifies the machine types and the number of replicas to use for workers and parameter servers.
scheduling This property is required. GoogleCloudMlV1SchedulingResponse
Optional. Scheduling options for a training job.
service_account This property is required. str
Optional. The email address of a service account to use when running the training application. You must have the iam.serviceAccounts.actAs permission for the specified service account. In addition, the AI Platform Training Google-managed service account must have the roles/iam.serviceAccountAdmin role for the specified service account. Learn more about configuring a service account. If not specified, the AI Platform Training Google-managed service account is used by default.
use_chief_in_tf_config This property is required. bool
Optional. Use chief instead of master in the TF_CONFIG environment variable when training with a custom container. Defaults to false. Learn more about this field. This field has no effect for training jobs that don't use a custom container.
worker_config This property is required. GoogleCloudMlV1ReplicaConfigResponse
Optional. The configuration for workers. You should only set workerConfig.acceleratorConfig if workerType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set workerConfig.imageUri only if you build a custom image for your worker. If workerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
worker_count This property is required. str
Optional. The number of worker replicas to use for the training job. Each replica in the cluster will be of the type specified in worker_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set worker_type. The default value is zero.
worker_type This property is required. str
Optional. Specifies the type of virtual machine to use for your training job's worker nodes. The supported values are the same as those described in the entry for masterType. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. If you use cloud_tpu for this value, see special instructions for configuring a custom TPU machine. This value must be present when scaleTier is set to CUSTOM and workerCount is greater than zero.
args This property is required. List<String>
Optional. Command-line arguments passed to the training application when it starts. If your job uses a custom container, then the arguments are passed to the container's ENTRYPOINT command.
enableWebAccess This property is required. Boolean
Optional. Whether you want AI Platform Training to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by TrainingOutput.web_access_uris or HyperparameterOutput.web_access_uris (within TrainingOutput.trials).
encryptionConfig This property is required. Property Map
Optional. Options for using customer-managed encryption keys (CMEK) to protect resources created by a training job, instead of using Google's default encryption. If this is set, then all resources created by the training job will be encrypted with the customer-managed encryption key that you specify. Learn how and when to use CMEK with AI Platform Training.
evaluatorConfig This property is required. Property Map
Optional. The configuration for evaluators. You should only set evaluatorConfig.acceleratorConfig if evaluatorType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set evaluatorConfig.imageUri only if you build a custom image for your evaluator. If evaluatorConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
evaluatorCount This property is required. String
Optional. The number of evaluator replicas to use for the training job. Each replica in the cluster will be of the type specified in evaluator_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set evaluator_type. The default value is zero.
evaluatorType This property is required. String
Optional. Specifies the type of virtual machine to use for your training job's evaluator nodes. The supported values are the same as those described in the entry for masterType. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when scaleTier is set to CUSTOM and evaluatorCount is greater than zero.
hyperparameters This property is required. Property Map
Optional. The set of Hyperparameters to tune.
jobDir This property is required. String
Optional. A Google Cloud Storage path in which to store training outputs and other data needed for training. This path is passed to your TensorFlow program as the '--job-dir' command-line argument. The benefit of specifying this field is that Cloud ML validates the path for use in training.
masterConfig This property is required. Property Map
Optional. The configuration for your master worker. You should only set masterConfig.acceleratorConfig if masterType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set masterConfig.imageUri only if you build a custom image. Only one of masterConfig.imageUri and runtimeVersion should be set. Learn more about configuring custom containers.
masterType This property is required. String
Optional. Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. See the list of compatible Compute Engine machine types. Alternatively, you can use certain legacy machine types in this field. See the list of legacy machine types. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPUs.
network This property is required. String
Optional. The full name of the Compute Engine network to which the Job is peered. For example, projects/12345/global/networks/myVPC. The format of this field is projects/{project}/global/networks/{network}, where {project} is a project number (like 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the Job is not peered with any network. Learn about using VPC Network Peering.
packageUris This property is required. List<String>
The Google Cloud Storage location of the packages with the training program and any additional dependencies. The maximum number of package URIs is 100.
parameterServerConfig This property is required. Property Map
Optional. The configuration for parameter servers. You should only set parameterServerConfig.acceleratorConfig if parameterServerType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set parameterServerConfig.imageUri only if you build a custom image for your parameter server. If parameterServerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
parameterServerCount This property is required. String
Optional. The number of parameter server replicas to use for the training job. Each replica in the cluster will be of the type specified in parameter_server_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set parameter_server_type. The default value is zero.
parameterServerType This property is required. String
Optional. Specifies the type of virtual machine to use for your training job's parameter server. The supported values are the same as those described in the entry for master_type. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when scaleTier is set to CUSTOM and parameter_server_count is greater than zero.
pythonModule This property is required. String
The Python module name to run after installing the packages.
pythonVersion This property is required. String
Optional. The version of Python used in training. You must either specify this field or specify masterConfig.imageUri. The following Python versions are available: * Python '3.7' is available when runtime_version is set to '1.15' or later. * Python '3.5' is available when runtime_version is set to a version from '1.4' to '1.14'. * Python '2.7' is available when runtime_version is set to '1.15' or earlier. Read more about the Python versions available for each runtime version.
region This property is required. String
The region to run the training job in. See the available regions for AI Platform Training.
runtimeVersion This property is required. String
Optional. The AI Platform runtime version to use for training. You must either specify this field or specify masterConfig.imageUri. For more information, see the runtime version list and learn how to manage runtime versions.
scaleTier This property is required. String
Specifies the machine types and the number of replicas to use for workers and parameter servers.
scheduling This property is required. Property Map
Optional. Scheduling options for a training job.
serviceAccount This property is required. String
Optional. The email address of a service account to use when running the training application. You must have the iam.serviceAccounts.actAs permission for the specified service account. In addition, the AI Platform Training Google-managed service account must have the roles/iam.serviceAccountAdmin role for the specified service account. Learn more about configuring a service account. If not specified, the AI Platform Training Google-managed service account is used by default.
useChiefInTfConfig This property is required. Boolean
Optional. Use chief instead of master in the TF_CONFIG environment variable when training with a custom container. Defaults to false. Learn more about this field. This field has no effect for training jobs that don't use a custom container.
workerConfig This property is required. Property Map
Optional. The configuration for workers. You should only set workerConfig.acceleratorConfig if workerType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set workerConfig.imageUri only if you build a custom image for your worker. If workerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.
workerCount This property is required. String
Optional. The number of worker replicas to use for the training job. Each replica in the cluster will be of the type specified in worker_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set worker_type. The default value is zero.
workerType This property is required. String
Optional. Specifies the type of virtual machine to use for your training job's worker nodes. The supported values are the same as those described in the entry for masterType. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. If you use cloud_tpu for this value, see special instructions for configuring a custom TPU machine. This value must be present when scaleTier is set to CUSTOM and workerCount is greater than zero.
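All of the properties above are returned on the job's training input. As a minimal sketch in the Node.js form of getJob, the TypeScript program below reads a few of them from an existing job; the job and project names are placeholders, and it assumes the getJob result exposes a trainingInput field of this type.

import * as google_native from "@pulumi/google-native";

// Minimal sketch: look up an existing training job and inspect its training input.
// "my-training-job" and "my-project" are placeholders for real identifiers.
const job = google_native.ml.v1.getJobOutput({
    jobId: "my-training-job",
    project: "my-project",
});

// Read a few TrainingInput properties documented above; each is a plain string
// on the response (for example "CUSTOM" or "n1-standard-8").
export const scaleTier = job.trainingInput.apply(ti => ti?.scaleTier);
export const masterType = job.trainingInput.apply(ti => ti?.masterType);
export const region = job.trainingInput.apply(ti => ti?.region);

The output form is used here so the values can be exported directly as stack outputs; the direct getJob form would work the same way with an awaited Promise.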

GoogleCloudMlV1__TrainingOutputResponse

BuiltInAlgorithmOutput This property is required. Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__BuiltInAlgorithmOutputResponse
Details related to built-in algorithm jobs. Only set for built-in algorithm jobs.
CompletedTrialCount This property is required. string
The number of hyperparameter tuning trials that completed successfully. Only set for hyperparameter tuning jobs.
ConsumedMLUnits This property is required. double
The amount of ML units consumed by the job.
HyperparameterMetricTag This property is required. string
The TensorFlow summary tag name used for optimizing hyperparameter tuning trials. See HyperparameterSpec.hyperparameterMetricTag for more information. Only set for hyperparameter tuning jobs.
IsBuiltInAlgorithmJob This property is required. bool
Whether this job is a built-in Algorithm job.
IsHyperparameterTuningJob This property is required. bool
Whether this job is a hyperparameter tuning job.
Trials This property is required. List<Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__HyperparameterOutputResponse>
Results for individual Hyperparameter trials. Only set for hyperparameter tuning jobs.
WebAccessUris This property is required. Dictionary<string, string>
URIs for accessing interactive shells (one URI for each training node). Only available if training_input.enable_web_access is true. The keys are names of each node in the training job; for example, master-replica-0 for the master node, worker-replica-0 for the first worker, and ps-replica-0 for the first parameter server. The values are the URIs for each node's interactive shell.
BuiltInAlgorithmOutput This property is required. GoogleCloudMlV1__BuiltInAlgorithmOutputResponse
Details related to built-in algorithm jobs. Only set for built-in algorithm jobs.
CompletedTrialCount This property is required. string
The number of hyperparameter tuning trials that completed successfully. Only set for hyperparameter tuning jobs.
ConsumedMLUnits This property is required. float64
The amount of ML units consumed by the job.
HyperparameterMetricTag This property is required. string
The TensorFlow summary tag name used for optimizing hyperparameter tuning trials. See HyperparameterSpec.hyperparameterMetricTag for more information. Only set for hyperparameter tuning jobs.
IsBuiltInAlgorithmJob This property is required. bool
Whether this job is a built-in Algorithm job.
IsHyperparameterTuningJob This property is required. bool
Whether this job is a hyperparameter tuning job.
Trials This property is required. []GoogleCloudMlV1__HyperparameterOutputResponse
Results for individual Hyperparameter trials. Only set for hyperparameter tuning jobs.
WebAccessUris This property is required. map[string]string
URIs for accessing interactive shells (one URI for each training node). Only available if training_input.enable_web_access is true. The keys are names of each node in the training job; for example, master-replica-0 for the master node, worker-replica-0 for the first worker, and ps-replica-0 for the first parameter server. The values are the URIs for each node's interactive shell.
builtInAlgorithmOutput This property is required. GoogleCloudMlV1__BuiltInAlgorithmOutputResponse
Details related to built-in algorithm jobs. Only set for built-in algorithm jobs.
completedTrialCount This property is required. String
The number of hyperparameter tuning trials that completed successfully. Only set for hyperparameter tuning jobs.
consumedMLUnits This property is required. Double
The amount of ML units consumed by the job.
hyperparameterMetricTag This property is required. String
The TensorFlow summary tag name used for optimizing hyperparameter tuning trials. See HyperparameterSpec.hyperparameterMetricTag for more information. Only set for hyperparameter tuning jobs.
isBuiltInAlgorithmJob This property is required. Boolean
Whether this job is a built-in Algorithm job.
isHyperparameterTuningJob This property is required. Boolean
Whether this job is a hyperparameter tuning job.
trials This property is required. List<GoogleCloudMlV1__HyperparameterOutputResponse>
Results for individual Hyperparameter trials. Only set for hyperparameter tuning jobs.
webAccessUris This property is required. Map<String,String>
URIs for accessing interactive shells (one URI for each training node). Only available if training_input.enable_web_access is true. The keys are names of each node in the training job; for example, master-replica-0 for the master node, worker-replica-0 for the first worker, and ps-replica-0 for the first parameter server. The values are the URIs for each node's interactive shell.
builtInAlgorithmOutput This property is required. GoogleCloudMlV1__BuiltInAlgorithmOutputResponse
Details related to built-in algorithm jobs. Only set for built-in algorithm jobs.
completedTrialCount This property is required. string
The number of hyperparameter tuning trials that completed successfully. Only set for hyperparameter tuning jobs.
consumedMLUnits This property is required. number
The amount of ML units consumed by the job.
hyperparameterMetricTag This property is required. string
The TensorFlow summary tag name used for optimizing hyperparameter tuning trials. See HyperparameterSpec.hyperparameterMetricTag for more information. Only set for hyperparameter tuning jobs.
isBuiltInAlgorithmJob This property is required. boolean
Whether this job is a built-in Algorithm job.
isHyperparameterTuningJob This property is required. boolean
Whether this job is a hyperparameter tuning job.
trials This property is required. GoogleCloudMlV1__HyperparameterOutputResponse[]
Results for individual Hyperparameter trials. Only set for hyperparameter tuning jobs.
webAccessUris This property is required. {[key: string]: string}
URIs for accessing interactive shells (one URI for each training node). Only available if training_input.enable_web_access is true. The keys are names of each node in the training job; for example, master-replica-0 for the master node, worker-replica-0 for the first worker, and ps-replica-0 for the first parameter server. The values are the URIs for each node's interactive shell.
built_in_algorithm_output This property is required. GoogleCloudMlV1BuiltInAlgorithmOutputResponse
Details related to built-in algorithm jobs. Only set for built-in algorithm jobs.
completed_trial_count This property is required. str
The number of hyperparameter tuning trials that completed successfully. Only set for hyperparameter tuning jobs.
consumed_ml_units This property is required. float
The amount of ML units consumed by the job.
hyperparameter_metric_tag This property is required. str
The TensorFlow summary tag name used for optimizing hyperparameter tuning trials. See HyperparameterSpec.hyperparameterMetricTag for more information. Only set for hyperparameter tuning jobs.
is_built_in_algorithm_job This property is required. bool
Whether this job is a built-in Algorithm job.
is_hyperparameter_tuning_job This property is required. bool
Whether this job is a hyperparameter tuning job.
trials This property is required. Sequence[GoogleCloudMlV1HyperparameterOutputResponse]
Results for individual Hyperparameter trials. Only set for hyperparameter tuning jobs.
web_access_uris This property is required. Mapping[str, str]
URIs for accessing interactive shells (one URI for each training node). Only available if training_input.enable_web_access is true. The keys are names of each node in the training job; for example, master-replica-0 for the master node, worker-replica-0 for the first worker, and ps-replica-0 for the first parameter server. The values are the URIs for each node's interactive shell.
builtInAlgorithmOutput This property is required. Property Map
Details related to built-in algorithm jobs. Only set for built-in algorithm jobs.
completedTrialCount This property is required. String
The number of hyperparameter tuning trials that completed successfully. Only set for hyperparameter tuning jobs.
consumedMLUnits This property is required. Number
The amount of ML units consumed by the job.
hyperparameterMetricTag This property is required. String
The TensorFlow summary tag name used for optimizing hyperparameter tuning trials. See HyperparameterSpec.hyperparameterMetricTag for more information. Only set for hyperparameter tuning jobs.
isBuiltInAlgorithmJob This property is required. Boolean
Whether this job is a built-in Algorithm job.
isHyperparameterTuningJob This property is required. Boolean
Whether this job is a hyperparameter tuning job.
trials This property is required. List<Property Map>
Results for individual Hyperparameter trials. Only set for hyperparameter tuning jobs.
webAccessUris This property is required. Map<String>
URIs for accessing interactive shells (one URI for each training node). Only available if training_input.enable_web_access is true. The keys are names of each node in the training job; for example, master-replica-0 for the master node, worker-replica-0 for the first worker, and ps-replica-0 for the first parameter server. The values are the URIs for each node's interactive shell.
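In the same spirit, here is a hedged TypeScript sketch for the output side. The job and project identifiers are placeholders, and it assumes the getJob result exposes a trainingOutput field of this type, with the Node.js property names listed above.

import * as google_native from "@pulumi/google-native";

// Minimal sketch: inspect the training output of a finished hyperparameter tuning job.
// Job and project identifiers are placeholders.
const job = google_native.ml.v1.getJobOutput({
    jobId: "my-tuning-job",
    project: "my-project",
});

// consumedMLUnits, completedTrialCount, and webAccessUris correspond to the
// properties above; webAccessUris is only populated when interactive shell
// access (enable_web_access) was requested for the job.
export const mlUnits = job.trainingOutput.apply(out => out?.consumedMLUnits);
export const trialCount = job.trainingOutput.apply(out => out?.completedTrialCount);
export const shellUris = job.trainingOutput.apply(out => out?.webAccessUris);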

Package Details

Repository
Google Cloud Native pulumi/pulumi-google-native
License
Apache-2.0
