Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

google-native.notebooks/v1.getSchedule

Gets details of a schedule.

Using getSchedule

Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

function getSchedule(args: GetScheduleArgs, opts?: InvokeOptions): Promise<GetScheduleResult>
function getScheduleOutput(args: GetScheduleOutputArgs, opts?: InvokeOptions): Output<GetScheduleResult>
def get_schedule(location: Optional[str] = None,
                 project: Optional[str] = None,
                 schedule_id: Optional[str] = None,
                 opts: Optional[InvokeOptions] = None) -> GetScheduleResult
def get_schedule_output(location: Optional[pulumi.Input[str]] = None,
                        project: Optional[pulumi.Input[str]] = None,
                        schedule_id: Optional[pulumi.Input[str]] = None,
                        opts: Optional[InvokeOptions] = None) -> Output[GetScheduleResult]
func LookupSchedule(ctx *Context, args *LookupScheduleArgs, opts ...InvokeOption) (*LookupScheduleResult, error)
func LookupScheduleOutput(ctx *Context, args *LookupScheduleOutputArgs, opts ...InvokeOption) LookupScheduleResultOutput

> Note: This function is named LookupSchedule in the Go SDK.

public static class GetSchedule 
{
    public static Task<GetScheduleResult> InvokeAsync(GetScheduleArgs args, InvokeOptions? opts = null)
    public static Output<GetScheduleResult> Invoke(GetScheduleInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetScheduleResult> getSchedule(GetScheduleArgs args, InvokeOptions options)
// Output-based functions aren't available in Java yet
fn::invoke:
  function: google-native:notebooks/v1:getSchedule
  arguments:
    # arguments dictionary
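
As a minimal TypeScript sketch of both forms: the project, location, and schedule ID values below are placeholders chosen for illustration, not values taken from this page.

import * as google_native from "@pulumi/google-native";

// Direct form: plain arguments, Promise-wrapped result.
// "my-project", "us-central1", and "my-schedule" are placeholder values.
const schedulePromise = google_native.notebooks.v1.getSchedule({
    project: "my-project",       // optional; defaults to the provider's project
    location: "us-central1",
    scheduleId: "my-schedule",
});

// Output form: Input-wrapped arguments, Output-wrapped result, so the
// arguments can come from other resources' outputs.
const schedule = google_native.notebooks.v1.getScheduleOutput({
    location: "us-central1",
    scheduleId: "my-schedule",
});

// Properties of the result are lifted to Outputs and can be exported.
export const scheduleName = schedule.name;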

The following arguments are supported:

Location This property is required. string
ScheduleId This property is required. string
Project string
Location This property is required. string
ScheduleId This property is required. string
Project string
location This property is required. String
scheduleId This property is required. String
project String
location This property is required. string
scheduleId This property is required. string
project string
location This property is required. str
schedule_id This property is required. str
project str
location This property is required. String
scheduleId This property is required. String
project String

getSchedule Result

The following output properties are available:

CreateTime string
Time the schedule was created.
CronSchedule string
Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
Description string
A brief description of this environment.
DisplayName string
Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens -, and underscores _.
ExecutionTemplate Pulumi.GoogleNative.Notebooks.V1.Outputs.ExecutionTemplateResponse
Notebook Execution Template corresponding to this schedule.
Name string
The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
RecentExecutions List<Pulumi.GoogleNative.Notebooks.V1.Outputs.ExecutionResponse>
The most recent execution names triggered from this schedule and their corresponding states.
State string
TimeZone string
Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen time zone. For UTC, use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).
UpdateTime string
Time the schedule was last updated.
CreateTime string
Time the schedule was created.
CronSchedule string
Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
Description string
A brief description of this environment.
DisplayName string
Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens -, and underscores _.
ExecutionTemplate ExecutionTemplateResponse
Notebook Execution Template corresponding to this schedule.
Name string
The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
RecentExecutions []ExecutionResponse
The most recent execution names triggered from this schedule and their corresponding states.
State string
TimeZone string
Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen time zone. For UTC, use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).
UpdateTime string
Time the schedule was last updated.
createTime String
Time the schedule was created.
cronSchedule String
Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
description String
A brief description of this environment.
displayName String
Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens -, and underscores _.
executionTemplate ExecutionTemplateResponse
Notebook Execution Template corresponding to this schedule.
name String
The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
recentExecutions List<ExecutionResponse>
The most recent execution names triggered from this schedule and their corresponding states.
state String
timeZone String
Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen time zone. For UTC, use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).
updateTime String
Time the schedule was last updated.
createTime string
Time the schedule was created.
cronSchedule string
Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
description string
A brief description of this environment.
displayName string
Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens -, and underscores _.
executionTemplate ExecutionTemplateResponse
Notebook Execution Template corresponding to this schedule.
name string
The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
recentExecutions ExecutionResponse[]
The most recent execution names triggered from this schedule and their corresponding states.
state string
timeZone string
Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen time zone. For UTC, use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).
updateTime string
Time the schedule was last updated.
create_time str
Time the schedule was created.
cron_schedule str
Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
description str
A brief description of this environment.
display_name str
Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens -, and underscores _.
execution_template ExecutionTemplateResponse
Notebook Execution Template corresponding to this schedule.
name str
The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
recent_executions Sequence[ExecutionResponse]
The most recent execution names triggered from this schedule and their corresponding states.
state str
time_zone str
Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen time zone. For UTC, use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).
update_time str
Time the schedule was last updated.
createTime String
Time the schedule was created.
cronSchedule String
Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
description String
A brief description of this environment.
displayName String
Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens -, and underscores _.
executionTemplate Property Map
Notebook Execution Template corresponding to this schedule.
name String
The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
recentExecutions List<Property Map>
The most recent execution names triggered from this schedule and their corresponding states.
state String
timeZone String
Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the rules for daylight saving time are determined by the chosen time zone. For UTC, use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).
updateTime String
Time the schedule was last updated.
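
As a brief TypeScript sketch of consuming these outputs (placeholder location and schedule ID), scalar properties can be exported directly and list-valued properties such as recentExecutions can be transformed with apply:

import * as google_native from "@pulumi/google-native";

// Placeholder identifiers for illustration only.
const schedule = google_native.notebooks.v1.getScheduleOutput({
    location: "us-central1",
    scheduleId: "my-schedule",
});

// Scalar result properties lift directly to Outputs.
export const cronSchedule = schedule.cronSchedule;
export const timeZone = schedule.timeZone;

// recentExecutions is a list of ExecutionResponse values; derive a value
// from it with apply, e.g. the state of the first listed execution (if any).
export const firstExecutionState = schedule.recentExecutions.apply(
    execs => execs.length > 0 ? execs[0].state : "none");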

Supporting Types

DataprocParametersResponse

Cluster This property is required. string
URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
Cluster This property is required. string
URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
cluster This property is required. String
URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
cluster This property is required. string
URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
cluster This property is required. str
URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
cluster This property is required. String
URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}

ExecutionResponse

CreateTime This property is required. string
Time the Execution was instantiated.
Description This property is required. string
A brief description of this execution.
DisplayName This property is required. string
Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
ExecutionTemplate This property is required. Pulumi.GoogleNative.Notebooks.V1.Inputs.ExecutionTemplateResponse
execute metadata including name, hardware spec, region, labels, etc.
JobUri This property is required. string
The URI of the external job used to execute the notebook.
Name This property is required. string
The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
OutputNotebookFile This property is required. string
Output notebook file generated by this execution
State This property is required. string
State of the underlying AI Platform job.
UpdateTime This property is required. string
Time the Execution was last updated.
CreateTime This property is required. string
Time the Execution was instantiated.
Description This property is required. string
A brief description of this execution.
DisplayName This property is required. string
Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
ExecutionTemplate This property is required. ExecutionTemplateResponse
execute metadata including name, hardware spec, region, labels, etc.
JobUri This property is required. string
The URI of the external job used to execute the notebook.
Name This property is required. string
The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
OutputNotebookFile This property is required. string
Output notebook file generated by this execution
State This property is required. string
State of the underlying AI Platform job.
UpdateTime This property is required. string
Time the Execution was last updated.
createTime This property is required. String
Time the Execution was instantiated.
description This property is required. String
A brief description of this execution.
displayName This property is required. String
Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
executionTemplate This property is required. ExecutionTemplateResponse
execute metadata including name, hardware spec, region, labels, etc.
jobUri This property is required. String
The URI of the external job used to execute the notebook.
name This property is required. String
The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
outputNotebookFile This property is required. String
Output notebook file generated by this execution
state This property is required. String
State of the underlying AI Platform job.
updateTime This property is required. String
Time the Execution was last updated.
createTime This property is required. string
Time the Execution was instantiated.
description This property is required. string
A brief description of this execution.
displayName This property is required. string
Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
executionTemplate This property is required. ExecutionTemplateResponse
execute metadata including name, hardware spec, region, labels, etc.
jobUri This property is required. string
The URI of the external job used to execute the notebook.
name This property is required. string
The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
outputNotebookFile This property is required. string
Output notebook file generated by this execution
state This property is required. string
State of the underlying AI Platform job.
updateTime This property is required. string
Time the Execution was last updated.
create_time This property is required. str
Time the Execution was instantiated.
description This property is required. str
A brief description of this execution.
display_name This property is required. str
Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
execution_template This property is required. ExecutionTemplateResponse
execute metadata including name, hardware spec, region, labels, etc.
job_uri This property is required. str
The URI of the external job used to execute the notebook.
name This property is required. str
The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
output_notebook_file This property is required. str
Output notebook file generated by this execution
state This property is required. str
State of the underlying AI Platform job.
update_time This property is required. str
Time the Execution was last updated.
createTime This property is required. String
Time the Execution was instantiated.
description This property is required. String
A brief description of this execution.
displayName This property is required. String
Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
executionTemplate This property is required. Property Map
execute metadata including name, hardware spec, region, labels, etc.
jobUri This property is required. String
The URI of the external job used to execute the notebook.
name This property is required. String
The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
outputNotebookFile This property is required. String
Output notebook file generated by this execution
state This property is required. String
State of the underlying AI Platform job.
updateTime This property is required. String
Time the Execution was last updated.

ExecutionTemplateResponse

AcceleratorConfig This property is required. Pulumi.GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfigResponse
Configuration (count and accelerator type) for hardware running notebook execution.
ContainerImageUri This property is required. string
Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
DataprocParameters This property is required. Pulumi.GoogleNative.Notebooks.V1.Inputs.DataprocParametersResponse
Parameters used in Dataproc JobType executions.
InputNotebookFile This property is required. string
Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
JobType This property is required. string
The type of Job to be used on this execution.
KernelSpec This property is required. string
Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
Labels This property is required. Dictionary<string, string>
Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
MasterType This property is required. string
Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
OutputNotebookFolder This property is required. string
Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
Parameters This property is required. string
Parameters used within the 'input_notebook_file' notebook.
ParamsYamlFile This property is required. string
Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
ScaleTier This property is required. string
Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

Deprecated: Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

ServiceAccount This property is required. string
The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
Tensorboard This property is required. string
The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
VertexAiParameters This property is required. Pulumi.GoogleNative.Notebooks.V1.Inputs.VertexAIParametersResponse
Parameters used in Vertex AI JobType executions.
AcceleratorConfig This property is required. SchedulerAcceleratorConfigResponse
Configuration (count and accelerator type) for hardware running notebook execution.
ContainerImageUri This property is required. string
Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
DataprocParameters This property is required. DataprocParametersResponse
Parameters used in Dataproc JobType executions.
InputNotebookFile This property is required. string
Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
JobType This property is required. string
The type of Job to be used on this execution.
KernelSpec This property is required. string
Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
Labels This property is required. map[string]string
Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
MasterType This property is required. string
Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
OutputNotebookFolder This property is required. string
Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
Parameters This property is required. string
Parameters used within the 'input_notebook_file' notebook.
ParamsYamlFile This property is required. string
Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
ScaleTier This property is required. string
Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

Deprecated: Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

ServiceAccount This property is required. string
The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
Tensorboard This property is required. string
The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
VertexAiParameters This property is required. VertexAIParametersResponse
Parameters used in Vertex AI JobType executions.
acceleratorConfig This property is required. SchedulerAcceleratorConfigResponse
Configuration (count and accelerator type) for hardware running notebook execution.
containerImageUri This property is required. String
Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
dataprocParameters This property is required. DataprocParametersResponse
Parameters used in Dataproc JobType executions.
inputNotebookFile This property is required. String
Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
jobType This property is required. String
The type of Job to be used on this execution.
kernelSpec This property is required. String
Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
labels This property is required. Map<String,String>
Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
masterType This property is required. String
Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
outputNotebookFolder This property is required. String
Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
parameters This property is required. String
Parameters used within the 'input_notebook_file' notebook.
paramsYamlFile This property is required. String
Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
scaleTier This property is required. String
Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

Deprecated: Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

serviceAccount This property is required. String
The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
tensorboard This property is required. String
The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
vertexAiParameters This property is required. VertexAIParametersResponse
Parameters used in Vertex AI JobType executions.
acceleratorConfig This property is required. SchedulerAcceleratorConfigResponse
Configuration (count and accelerator type) for hardware running notebook execution.
containerImageUri This property is required. string
Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
dataprocParameters This property is required. DataprocParametersResponse
Parameters used in Dataproc JobType executions.
inputNotebookFile This property is required. string
Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
jobType This property is required. string
The type of Job to be used on this execution.
kernelSpec This property is required. string
Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
labels This property is required. {[key: string]: string}
Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
masterType This property is required. string
Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
outputNotebookFolder This property is required. string
Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
parameters This property is required. string
Parameters used within the 'input_notebook_file' notebook.
paramsYamlFile This property is required. string
Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
scaleTier This property is required. string
Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

Deprecated: Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

serviceAccount This property is required. string
The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
tensorboard This property is required. string
The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
vertexAiParameters This property is required. VertexAIParametersResponse
Parameters used in Vertex AI JobType executions.
accelerator_config This property is required. SchedulerAcceleratorConfigResponse
Configuration (count and accelerator type) for hardware running notebook execution.
container_image_uri This property is required. str
Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
dataproc_parameters This property is required. DataprocParametersResponse
Parameters used in Dataproc JobType executions.
input_notebook_file This property is required. str
Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
job_type This property is required. str
The type of Job to be used on this execution.
kernel_spec This property is required. str
Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
labels This property is required. Mapping[str, str]
Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
master_type This property is required. str
Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
output_notebook_folder This property is required. str
Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
parameters This property is required. str
Parameters used within the 'input_notebook_file' notebook.
params_yaml_file This property is required. str
Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
scale_tier This property is required. str
Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

Deprecated: Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

service_account This property is required. str
The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
tensorboard This property is required. str
The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
vertex_ai_parameters This property is required. VertexAIParametersResponse
Parameters used in Vertex AI JobType executions.
acceleratorConfig This property is required. Property Map
Configuration (count and accelerator type) for hardware running notebook execution.
containerImageUri This property is required. String
Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
dataprocParameters This property is required. Property Map
Parameters used in Dataproc JobType executions.
inputNotebookFile This property is required. String
Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
jobType This property is required. String
The type of Job to be used on this execution.
kernelSpec This property is required. String
Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
labels This property is required. Map<String>
Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
masterType This property is required. String
Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: - n1-standard-4 - n1-standard-8 - n1-standard-16 - n1-standard-32 - n1-standard-64 - n1-standard-96 - n1-highmem-2 - n1-highmem-4 - n1-highmem-8 - n1-highmem-16 - n1-highmem-32 - n1-highmem-64 - n1-highmem-96 - n1-highcpu-16 - n1-highcpu-32 - n1-highcpu-64 - n1-highcpu-96 Alternatively, you can use the following legacy machine types: - standard - large_model - complex_model_s - complex_model_m - complex_model_l - standard_gpu - complex_model_m_gpu - complex_model_l_gpu - standard_p100 - complex_model_m_p100 - standard_v100 - large_model_v100 - complex_model_m_v100 - complex_model_l_v100 Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
outputNotebookFolder This property is required. String
Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
parameters This property is required. String
Parameters used within the 'input_notebook_file' notebook.
paramsYamlFile This property is required. String
Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
scaleTier This property is required. String
Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

Deprecated: Required. Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.

serviceAccount This property is required. String
The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
tensorboard This property is required. String
The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
vertexAiParameters This property is required. Property Map
Parameters used in Vertex AI JobType executions.
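
As a TypeScript sketch (placeholder identifiers again), nested response types such as the execution template lift to Outputs as well, and the labels field can be inspected to tell scheduled executions ('nbs-scheduled') from immediate ones ('nbs-immediate'), per the description above:

import * as google_native from "@pulumi/google-native";

// Placeholder identifiers for illustration only.
const schedule = google_native.notebooks.v1.getScheduleOutput({
    location: "us-central1",
    scheduleId: "my-schedule",
});

// Fields of the nested ExecutionTemplateResponse are reachable through
// output lifting.
export const masterType = schedule.executionTemplate.masterType;
export const inputNotebook = schedule.executionTemplate.inputNotebookFile;

// Check whether the template carries the 'nbs-scheduled' label.
export const isScheduledRun = schedule.executionTemplate.labels.apply(
    labels => "nbs-scheduled" in labels);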

SchedulerAcceleratorConfigResponse

CoreCount This property is required. string
Count of cores of this accelerator.
Type This property is required. string
Type of this accelerator.
CoreCount This property is required. string
Count of cores of this accelerator.
Type This property is required. string
Type of this accelerator.
coreCount This property is required. String
Count of cores of this accelerator.
type This property is required. String
Type of this accelerator.
coreCount This property is required. string
Count of cores of this accelerator.
type This property is required. string
Type of this accelerator.
core_count This property is required. str
Count of cores of this accelerator.
type This property is required. str
Type of this accelerator.
coreCount This property is required. String
Count of cores of this accelerator.
type This property is required. String
Type of this accelerator.

VertexAIParametersResponse

Env This property is required. Dictionary<string, string>
Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
Network This property is required. string
The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
Env This property is required. map[string]string
Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
Network This property is required. string
The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
env This property is required. Map<String,String>
Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
network This property is required. String
The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
env This property is required. {[key: string]: string}
Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
network This property is required. string
The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
env This property is required. Mapping[str, str]
Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
network This property is required. str
The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
env This property is required. Map<String>
Environment variables. At most 100 environment variables can be specified and unique. Example: GCP_BUCKET=gs://my-bucket/samples/
network This property is required. String
The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

Package Details

Repository
Google Cloud Native pulumi/pulumi-google-native
License
Apache-2.0
