
Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

google-native.dataproc/v1.SessionTemplate


Create a session template synchronously.

Create SessionTemplate Resource

Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

Constructor syntax

new SessionTemplate(name: string, args?: SessionTemplateArgs, opts?: CustomResourceOptions);
@overload
def SessionTemplate(resource_name: str,
                    args: Optional[SessionTemplateArgs] = None,
                    opts: Optional[ResourceOptions] = None)

@overload
def SessionTemplate(resource_name: str,
                    opts: Optional[ResourceOptions] = None,
                    description: Optional[str] = None,
                    environment_config: Optional[EnvironmentConfigArgs] = None,
                    jupyter_session: Optional[JupyterConfigArgs] = None,
                    labels: Optional[Mapping[str, str]] = None,
                    location: Optional[str] = None,
                    name: Optional[str] = None,
                    project: Optional[str] = None,
                    runtime_config: Optional[RuntimeConfigArgs] = None)
func NewSessionTemplate(ctx *Context, name string, args *SessionTemplateArgs, opts ...ResourceOption) (*SessionTemplate, error)
public SessionTemplate(string name, SessionTemplateArgs? args = null, CustomResourceOptions? opts = null)
public SessionTemplate(String name, SessionTemplateArgs args)
public SessionTemplate(String name, SessionTemplateArgs args, CustomResourceOptions options)
type: google-native:dataproc/v1:SessionTemplate
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.

Parameters

name This property is required. string
The unique name of the resource.
args SessionTemplateArgs
The arguments to resource properties.
opts CustomResourceOptions
Bag of options to control resource's behavior.
resource_name This property is required. str
The unique name of the resource.
args SessionTemplateArgs
The arguments to resource properties.
opts ResourceOptions
Bag of options to control resource's behavior.
ctx Context
Context object for the current deployment.
name This property is required. string
The unique name of the resource.
args SessionTemplateArgs
The arguments to resource properties.
opts ResourceOption
Bag of options to control resource's behavior.
name This property is required. string
The unique name of the resource.
args SessionTemplateArgs
The arguments to resource properties.
opts CustomResourceOptions
Bag of options to control resource's behavior.
name This property is required. String
The unique name of the resource.
args This property is required. SessionTemplateArgs
The arguments to resource properties.
options CustomResourceOptions
Bag of options to control resource's behavior.

Constructor example

The following reference example uses placeholder values for all input properties.

var sessionTemplateResource = new GoogleNative.Dataproc.V1.SessionTemplate("sessionTemplateResource", new()
{
    Description = "string",
    EnvironmentConfig = new GoogleNative.Dataproc.V1.Inputs.EnvironmentConfigArgs
    {
        ExecutionConfig = new GoogleNative.Dataproc.V1.Inputs.ExecutionConfigArgs
        {
            IdleTtl = "string",
            KmsKey = "string",
            NetworkTags = new[]
            {
                "string",
            },
            NetworkUri = "string",
            ServiceAccount = "string",
            StagingBucket = "string",
            SubnetworkUri = "string",
            Ttl = "string",
        },
        PeripheralsConfig = new GoogleNative.Dataproc.V1.Inputs.PeripheralsConfigArgs
        {
            MetastoreService = "string",
            SparkHistoryServerConfig = new GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigArgs
            {
                DataprocCluster = "string",
            },
        },
    },
    JupyterSession = new GoogleNative.Dataproc.V1.Inputs.JupyterConfigArgs
    {
        DisplayName = "string",
        Kernel = GoogleNative.Dataproc.V1.JupyterConfigKernel.KernelUnspecified,
    },
    Labels = 
    {
        { "string", "string" },
    },
    Location = "string",
    Name = "string",
    Project = "string",
    RuntimeConfig = new GoogleNative.Dataproc.V1.Inputs.RuntimeConfigArgs
    {
        ContainerImage = "string",
        Properties = 
        {
            { "string", "string" },
        },
        RepositoryConfig = new GoogleNative.Dataproc.V1.Inputs.RepositoryConfigArgs
        {
            PypiRepositoryConfig = new GoogleNative.Dataproc.V1.Inputs.PyPiRepositoryConfigArgs
            {
                PypiRepository = "string",
            },
        },
        Version = "string",
    },
});
example, err := dataproc.NewSessionTemplate(ctx, "sessionTemplateResource", &dataproc.SessionTemplateArgs{
	Description: pulumi.String("string"),
	EnvironmentConfig: &dataproc.EnvironmentConfigArgs{
		ExecutionConfig: &dataproc.ExecutionConfigArgs{
			IdleTtl: pulumi.String("string"),
			KmsKey:  pulumi.String("string"),
			NetworkTags: pulumi.StringArray{
				pulumi.String("string"),
			},
			NetworkUri:     pulumi.String("string"),
			ServiceAccount: pulumi.String("string"),
			StagingBucket:  pulumi.String("string"),
			SubnetworkUri:  pulumi.String("string"),
			Ttl:            pulumi.String("string"),
		},
		PeripheralsConfig: &dataproc.PeripheralsConfigArgs{
			MetastoreService: pulumi.String("string"),
			SparkHistoryServerConfig: &dataproc.SparkHistoryServerConfigArgs{
				DataprocCluster: pulumi.String("string"),
			},
		},
	},
	JupyterSession: &dataproc.JupyterConfigArgs{
		DisplayName: pulumi.String("string"),
		Kernel:      dataproc.JupyterConfigKernelKernelUnspecified,
	},
	Labels: pulumi.StringMap{
		"string": pulumi.String("string"),
	},
	Location: pulumi.String("string"),
	Name:     pulumi.String("string"),
	Project:  pulumi.String("string"),
	RuntimeConfig: &dataproc.RuntimeConfigArgs{
		ContainerImage: pulumi.String("string"),
		Properties: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
		RepositoryConfig: &dataproc.RepositoryConfigArgs{
			PypiRepositoryConfig: &dataproc.PyPiRepositoryConfigArgs{
				PypiRepository: pulumi.String("string"),
			},
		},
		Version: pulumi.String("string"),
	},
})
var sessionTemplateResource = new SessionTemplate("sessionTemplateResource", SessionTemplateArgs.builder()
    .description("string")
    .environmentConfig(EnvironmentConfigArgs.builder()
        .executionConfig(ExecutionConfigArgs.builder()
            .idleTtl("string")
            .kmsKey("string")
            .networkTags("string")
            .networkUri("string")
            .serviceAccount("string")
            .stagingBucket("string")
            .subnetworkUri("string")
            .ttl("string")
            .build())
        .peripheralsConfig(PeripheralsConfigArgs.builder()
            .metastoreService("string")
            .sparkHistoryServerConfig(SparkHistoryServerConfigArgs.builder()
                .dataprocCluster("string")
                .build())
            .build())
        .build())
    .jupyterSession(JupyterConfigArgs.builder()
        .displayName("string")
        .kernel("KERNEL_UNSPECIFIED")
        .build())
    .labels(Map.of("string", "string"))
    .location("string")
    .name("string")
    .project("string")
    .runtimeConfig(RuntimeConfigArgs.builder()
        .containerImage("string")
        .properties(Map.of("string", "string"))
        .repositoryConfig(RepositoryConfigArgs.builder()
            .pypiRepositoryConfig(PyPiRepositoryConfigArgs.builder()
                .pypiRepository("string")
                .build())
            .build())
        .version("string")
        .build())
    .build());
session_template_resource = google_native.dataproc.v1.SessionTemplate("sessionTemplateResource",
    description="string",
    environment_config={
        "execution_config": {
            "idle_ttl": "string",
            "kms_key": "string",
            "network_tags": ["string"],
            "network_uri": "string",
            "service_account": "string",
            "staging_bucket": "string",
            "subnetwork_uri": "string",
            "ttl": "string",
        },
        "peripherals_config": {
            "metastore_service": "string",
            "spark_history_server_config": {
                "dataproc_cluster": "string",
            },
        },
    },
    jupyter_session={
        "display_name": "string",
        "kernel": google_native.dataproc.v1.JupyterConfigKernel.KERNEL_UNSPECIFIED,
    },
    labels={
        "string": "string",
    },
    location="string",
    name="string",
    project="string",
    runtime_config={
        "container_image": "string",
        "properties": {
            "string": "string",
        },
        "repository_config": {
            "pypi_repository_config": {
                "pypi_repository": "string",
            },
        },
        "version": "string",
    })
const sessionTemplateResource = new google_native.dataproc.v1.SessionTemplate("sessionTemplateResource", {
    description: "string",
    environmentConfig: {
        executionConfig: {
            idleTtl: "string",
            kmsKey: "string",
            networkTags: ["string"],
            networkUri: "string",
            serviceAccount: "string",
            stagingBucket: "string",
            subnetworkUri: "string",
            ttl: "string",
        },
        peripheralsConfig: {
            metastoreService: "string",
            sparkHistoryServerConfig: {
                dataprocCluster: "string",
            },
        },
    },
    jupyterSession: {
        displayName: "string",
        kernel: google_native.dataproc.v1.JupyterConfigKernel.KernelUnspecified,
    },
    labels: {
        string: "string",
    },
    location: "string",
    name: "string",
    project: "string",
    runtimeConfig: {
        containerImage: "string",
        properties: {
            string: "string",
        },
        repositoryConfig: {
            pypiRepositoryConfig: {
                pypiRepository: "string",
            },
        },
        version: "string",
    },
});
type: google-native:dataproc/v1:SessionTemplate
properties:
    description: string
    environmentConfig:
        executionConfig:
            idleTtl: string
            kmsKey: string
            networkTags:
                - string
            networkUri: string
            serviceAccount: string
            stagingBucket: string
            subnetworkUri: string
            ttl: string
        peripheralsConfig:
            metastoreService: string
            sparkHistoryServerConfig:
                dataprocCluster: string
    jupyterSession:
        displayName: string
        kernel: KERNEL_UNSPECIFIED
    labels:
        string: string
    location: string
    name: string
    project: string
    runtimeConfig:
        containerImage: string
        properties:
            string: string
        repositoryConfig:
            pypiRepositoryConfig:
                pypiRepository: string
        version: string
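The reference example above uses placeholder values throughout. As a rough sketch of a more realistic template in Python, the program below is illustrative only: the region, service account, staging bucket, and label values are assumptions, not values prescribed by this page.

import pulumi_google_native as google_native

# A hypothetical template for interactive PySpark sessions. Replace the
# location, service account, and staging bucket with your own resources.
example_template = google_native.dataproc.v1.SessionTemplate("exampleSessionTemplate",
    location="us-central1",
    description="Interactive PySpark sessions for the analytics team",
    jupyter_session={
        "display_name": "Analytics PySpark",
        "kernel": google_native.dataproc.v1.JupyterConfigKernel.PYTHON,
    },
    environment_config={
        "execution_config": {
            "service_account": "spark-runner@my-project.iam.gserviceaccount.com",
            "staging_bucket": "my-dataproc-staging-bucket",
            # Durations use the JSON Duration string form; "3600s" is one hour.
            "idle_ttl": "3600s",
        },
    },
    labels={
        "team": "analytics",
    })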

SessionTemplate Resource Properties

To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

Inputs

In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
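For example, the following sketch builds the same jupyter_session input both ways; the display name and kernel choice are arbitrary.

import pulumi_google_native as google_native

# Form 1: a typed argument class.
jupyter_as_args = google_native.dataproc.v1.JupyterConfigArgs(
    display_name="PySpark",
    kernel=google_native.dataproc.v1.JupyterConfigKernel.PYTHON)

# Form 2: an equivalent dictionary literal with snake_case keys.
jupyter_as_dict = {
    "display_name": "PySpark",
    "kernel": google_native.dataproc.v1.JupyterConfigKernel.PYTHON,
}

# Either value may be passed as the jupyter_session argument of SessionTemplate.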

The SessionTemplate resource accepts the following input properties:

Description string
Optional. Brief description of the template.
EnvironmentConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EnvironmentConfig
Optional. Environment configuration for session execution.
JupyterSession Pulumi.GoogleNative.Dataproc.V1.Inputs.JupyterConfig
Optional. Jupyter session config.
Labels Dictionary<string, string>
Optional. Labels to associate with sessions created using this template. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty, but, if present, must contain 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a session.
Location Changes to this property will trigger replacement. string
Name string
The resource name of the session template.
Project Changes to this property will trigger replacement. string
RuntimeConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.RuntimeConfig
Optional. Runtime configuration for session execution.
Description string
Optional. Brief description of the template.
EnvironmentConfig EnvironmentConfigArgs
Optional. Environment configuration for session execution.
JupyterSession JupyterConfigArgs
Optional. Jupyter session config.
Labels map[string]string
Optional. Labels to associate with sessions created using this template. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty, but, if present, must contain 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a session.
Location Changes to this property will trigger replacement. string
Name string
The resource name of the session template.
Project Changes to this property will trigger replacement. string
RuntimeConfig RuntimeConfigArgs
Optional. Runtime configuration for session execution.
description String
Optional. Brief description of the template.
environmentConfig EnvironmentConfig
Optional. Environment configuration for session execution.
jupyterSession JupyterConfig
Optional. Jupyter session config.
labels Map<String,String>
Optional. Labels to associate with sessions created using this template. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty, but, if present, must contain 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a session.
location Changes to this property will trigger replacement. String
name String
The resource name of the session template.
project Changes to this property will trigger replacement. String
runtimeConfig RuntimeConfig
Optional. Runtime configuration for session execution.
description string
Optional. Brief description of the template.
environmentConfig EnvironmentConfig
Optional. Environment configuration for session execution.
jupyterSession JupyterConfig
Optional. Jupyter session config.
labels {[key: string]: string}
Optional. Labels to associate with sessions created using this template. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty, but, if present, must contain 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a session.
location Changes to this property will trigger replacement. string
name string
The resource name of the session template.
project Changes to this property will trigger replacement. string
runtimeConfig RuntimeConfig
Optional. Runtime configuration for session execution.
description str
Optional. Brief description of the template.
environment_config EnvironmentConfigArgs
Optional. Environment configuration for session execution.
jupyter_session JupyterConfigArgs
Optional. Jupyter session config.
labels Mapping[str, str]
Optional. Labels to associate with sessions created using this template. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty, but, if present, must contain 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a session.
location Changes to this property will trigger replacement. str
name str
The resource name of the session template.
project Changes to this property will trigger replacement. str
runtime_config RuntimeConfigArgs
Optional. Runtime configuration for session execution.
description String
Optional. Brief description of the template.
environmentConfig Property Map
Optional. Environment configuration for session execution.
jupyterSession Property Map
Optional. Jupyter session config.
labels Map<String>
Optional. Labels to associate with sessions created using this template. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty, but, if present, must contain 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a session.
location Changes to this property will trigger replacement. String
name String
The resource name of the session template.
project Changes to this property will trigger replacement. String
runtimeConfig Property Map
Optional. Runtime configuration for session execution.

Outputs

All input properties are implicitly available as output properties. Additionally, the SessionTemplate resource produces the following output properties:

CreateTime string
The time when the template was created.
Creator string
The email address of the user who created the template.
Id string
The provider-assigned unique ID for this managed resource.
UpdateTime string
The time the template was last updated.
Uuid string
A session template UUID (Universally Unique Identifier). The service generates this value when it creates the session template.
CreateTime string
The time when the template was created.
Creator string
The email address of the user who created the template.
Id string
The provider-assigned unique ID for this managed resource.
UpdateTime string
The time the template was last updated.
Uuid string
A session template UUID (Universally Unique Identifier). The service generates this value when it creates the session template.
createTime String
The time when the template was created.
creator String
The email address of the user who created the template.
id String
The provider-assigned unique ID for this managed resource.
updateTime String
The time the template was last updated.
uuid String
A session template UUID (Universally Unique Identifier). The service generates this value when it creates the session template.
createTime string
The time when the template was created.
creator string
The email address of the user who created the template.
id string
The provider-assigned unique ID for this managed resource.
updateTime string
The time the template was last updated.
uuid string
A session template UUID (Universally Unique Identifier). The service generates this value when it creates the session template.
create_time str
The time when the template was created.
creator str
The email address of the user who created the template.
id str
The provider-assigned unique ID for this managed resource.
update_time str
The time the template was last updated.
uuid str
A session template UUID (Universally Unique Identifier). The service generates this value when it creates the session template.
createTime String
The time when the template was created.
creator String
The email address of the user who created the template.
id String
The provider-assigned unique ID for this managed resource.
updateTime String
The time the template was last updated.
uuid String
A session template UUID (Universally Unique Identifier). The service generates this value when it creates the session template.
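Output properties can be read from the resource object like any other Pulumi output. A minimal Python sketch follows; the resource name and us-central1 region are assumed values.

import pulumi
import pulumi_google_native as google_native

template = google_native.dataproc.v1.SessionTemplate("exampleSessionTemplate",
    location="us-central1")

# Export the server-generated identifiers as stack outputs.
pulumi.export("session_template_uuid", template.uuid)
pulumi.export("session_template_created", template.create_time)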

Supporting Types

EnvironmentConfig, EnvironmentConfigArgs

ExecutionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ExecutionConfig
Optional. Execution configuration for a workload.
PeripheralsConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.PeripheralsConfig
Optional. Peripherals configuration that the workload has access to.
ExecutionConfig ExecutionConfig
Optional. Execution configuration for a workload.
PeripheralsConfig PeripheralsConfig
Optional. Peripherals configuration that the workload has access to.
executionConfig ExecutionConfig
Optional. Execution configuration for a workload.
peripheralsConfig PeripheralsConfig
Optional. Peripherals configuration that the workload has access to.
executionConfig ExecutionConfig
Optional. Execution configuration for a workload.
peripheralsConfig PeripheralsConfig
Optional. Peripherals configuration that the workload has access to.
execution_config ExecutionConfig
Optional. Execution configuration for a workload.
peripherals_config PeripheralsConfig
Optional. Peripherals configuration that the workload has access to.
executionConfig Property Map
Optional. Execution configuration for a workload.
peripheralsConfig Property Map
Optional. Peripherals configuration that the workload has access to.

EnvironmentConfigResponse, EnvironmentConfigResponseArgs

ExecutionConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.ExecutionConfigResponse
Optional. Execution configuration for a workload.
PeripheralsConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.PeripheralsConfigResponse
Optional. Peripherals configuration that the workload has access to.
ExecutionConfig This property is required. ExecutionConfigResponse
Optional. Execution configuration for a workload.
PeripheralsConfig This property is required. PeripheralsConfigResponse
Optional. Peripherals configuration that the workload has access to.
executionConfig This property is required. ExecutionConfigResponse
Optional. Execution configuration for a workload.
peripheralsConfig This property is required. PeripheralsConfigResponse
Optional. Peripherals configuration that the workload has access to.
executionConfig This property is required. ExecutionConfigResponse
Optional. Execution configuration for a workload.
peripheralsConfig This property is required. PeripheralsConfigResponse
Optional. Peripherals configuration that the workload has access to.
execution_config This property is required. ExecutionConfigResponse
Optional. Execution configuration for a workload.
peripherals_config This property is required. PeripheralsConfigResponse
Optional. Peripherals configuration that the workload has access to.
executionConfig This property is required. Property Map
Optional. Execution configuration for a workload.
peripheralsConfig This property is required. Property Map
Optional. Peripherals configuration that the workload has access to.

ExecutionConfig, ExecutionConfigArgs

IdleTtl string
Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
KmsKey string
Optional. The Cloud KMS key to use for encryption.
NetworkTags List<string>
Optional. Tags used for network traffic control.
NetworkUri string
Optional. Network URI to connect workload to.
ServiceAccount string
Optional. Service account used to execute the workload.
StagingBucket string
Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
SubnetworkUri string
Optional. Subnetwork URI to connect workload to.
Ttl string
Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
IdleTtl string
Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
KmsKey string
Optional. The Cloud KMS key to use for encryption.
NetworkTags []string
Optional. Tags used for network traffic control.
NetworkUri string
Optional. Network URI to connect workload to.
ServiceAccount string
Optional. Service account used to execute the workload.
StagingBucket string
Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
SubnetworkUri string
Optional. Subnetwork URI to connect workload to.
Ttl string
Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
idleTtl String
Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
kmsKey String
Optional. The Cloud KMS key to use for encryption.
networkTags List<String>
Optional. Tags used for network traffic control.
networkUri String
Optional. Network URI to connect workload to.
serviceAccount String
Optional. Service account used to execute the workload.
stagingBucket String
Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
subnetworkUri String
Optional. Subnetwork URI to connect workload to.
ttl String
Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
idleTtl string
Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
kmsKey string
Optional. The Cloud KMS key to use for encryption.
networkTags string[]
Optional. Tags used for network traffic control.
networkUri string
Optional. Network URI to connect workload to.
serviceAccount string
Optional. Service account used to execute the workload.
stagingBucket string
Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
subnetworkUri string
Optional. Subnetwork URI to connect workload to.
ttl string
Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
idle_ttl str
Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
kms_key str
Optional. The Cloud KMS key to use for encryption.
network_tags Sequence[str]
Optional. Tags used for network traffic control.
network_uri str
Optional. Network URI to connect workload to.
service_account str
Optional. Service account used to execute the workload.
staging_bucket str
Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
subnetwork_uri str
Optional. Subnetwork URI to connect workload to.
ttl str
Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
idleTtl String
Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
kmsKey String
Optional. The Cloud KMS key to use for encryption.
networkTags List<String>
Optional. Tags used for network traffic control.
networkUri String
Optional. Network URI to connect workload to.
serviceAccount String
Optional. Service account used to execute the workload.
stagingBucket String
Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
subnetworkUri String
Optional. Subnetwork URI to connect workload to.
ttl String
Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
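Because ttl and idle_ttl are combined as OR conditions, a template can bound both a session's idle time and its total lifetime. A minimal Python sketch with assumed values:

import pulumi_google_native as google_native

bounded_template = google_native.dataproc.v1.SessionTemplate("boundedSessionTemplate",
    location="us-central1",  # assumed region
    environment_config={
        "execution_config": {
            # Terminate after 30 minutes idle, or after 8 hours total,
            # whichever occurs first. Values are JSON Duration strings.
            "idle_ttl": "1800s",
            "ttl": "28800s",
        },
    })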

ExecutionConfigResponse, ExecutionConfigResponseArgs

IdleTtl This property is required. string
Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
KmsKey This property is required. string
Optional. The Cloud KMS key to use for encryption.
NetworkTags This property is required. List<string>
Optional. Tags used for network traffic control.
NetworkUri This property is required. string
Optional. Network URI to connect workload to.
ServiceAccount This property is required. string
Optional. Service account used to execute the workload.
StagingBucket This property is required. string
Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
SubnetworkUri This property is required. string
Optional. Subnetwork URI to connect workload to.
Ttl This property is required. string
Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
IdleTtl This property is required. string
Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
KmsKey This property is required. string
Optional. The Cloud KMS key to use for encryption.
NetworkTags This property is required. []string
Optional. Tags used for network traffic control.
NetworkUri This property is required. string
Optional. Network URI to connect workload to.
ServiceAccount This property is required. string
Optional. Service account used to execute the workload.
StagingBucket This property is required. string
Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
SubnetworkUri This property is required. string
Optional. Subnetwork URI to connect workload to.
Ttl This property is required. string
Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
idleTtl This property is required. String
Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
kmsKey This property is required. String
Optional. The Cloud KMS key to use for encryption.
networkTags This property is required. List<String>
Optional. Tags used for network traffic control.
networkUri This property is required. String
Optional. Network URI to connect workload to.
serviceAccount This property is required. String
Optional. Service account used to execute the workload.
stagingBucket This property is required. String
Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
subnetworkUri This property is required. String
Optional. Subnetwork URI to connect workload to.
ttl This property is required. String
Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
idleTtl This property is required. string
Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
kmsKey This property is required. string
Optional. The Cloud KMS key to use for encryption.
networkTags This property is required. string[]
Optional. Tags used for network traffic control.
networkUri This property is required. string
Optional. Network URI to connect workload to.
serviceAccount This property is required. string
Optional. Service account used to execute the workload.
stagingBucket This property is required. string
Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
subnetworkUri This property is required. string
Optional. Subnetwork URI to connect workload to.
ttl This property is required. string
Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
idle_ttl This property is required. str
Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
kms_key This property is required. str
Optional. The Cloud KMS key to use for encryption.
network_tags This property is required. Sequence[str]
Optional. Tags used for network traffic control.
network_uri This property is required. str
Optional. Network URI to connect workload to.
service_account This property is required. str
Optional. Service account used to execute the workload.
staging_bucket This property is required. str
Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
subnetwork_uri This property is required. str
Optional. Subnetwork URI to connect workload to.
ttl This property is required. str
Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
idleTtl This property is required. String
Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
kmsKey This property is required. String
Optional. The Cloud KMS key to use for encryption.
networkTags This property is required. List<String>
Optional. Tags used for network traffic control.
networkUri This property is required. String
Optional. Network URI to connect workload to.
serviceAccount This property is required. String
Optional. Service account used to execute the workload.
stagingBucket This property is required. String
Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
subnetworkUri This property is required. String
Optional. Subnetwork URI to connect workload to.
ttl This property is required. String
Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.

JupyterConfig, JupyterConfigArgs

DisplayName string
Optional. Display name, shown in the Jupyter kernelspec card.
Kernel Pulumi.GoogleNative.Dataproc.V1.JupyterConfigKernel
Optional. Kernel.
DisplayName string
Optional. Display name, shown in the Jupyter kernelspec card.
Kernel JupyterConfigKernel
Optional. Kernel.
displayName String
Optional. Display name, shown in the Jupyter kernelspec card.
kernel JupyterConfigKernel
Optional. Kernel.
displayName string
Optional. Display name, shown in the Jupyter kernelspec card.
kernel JupyterConfigKernel
Optional. Kernel.
display_name str
Optional. Display name, shown in the Jupyter kernelspec card.
kernel JupyterConfigKernel
Optional. Kernel.
displayName String
Optional. Display name, shown in the Jupyter kernelspec card.
kernel "KERNEL_UNSPECIFIED" | "PYTHON" | "SCALA"
Optional. Kernel.

JupyterConfigKernel, JupyterConfigKernelArgs

KernelUnspecified
KERNEL_UNSPECIFIED: The kernel is unknown.
Python
PYTHON: Python kernel.
Scala
SCALA: Scala kernel.
JupyterConfigKernelKernelUnspecified
KERNEL_UNSPECIFIED: The kernel is unknown.
JupyterConfigKernelPython
PYTHON: Python kernel.
JupyterConfigKernelScala
SCALA: Scala kernel.
KernelUnspecified
KERNEL_UNSPECIFIED: The kernel is unknown.
Python
PYTHON: Python kernel.
Scala
SCALA: Scala kernel.
KernelUnspecified
KERNEL_UNSPECIFIED: The kernel is unknown.
Python
PYTHON: Python kernel.
Scala
SCALA: Scala kernel.
KERNEL_UNSPECIFIED
KERNEL_UNSPECIFIED: The kernel is unknown.
PYTHON
PYTHON: Python kernel.
SCALA
SCALA: Scala kernel.
"KERNEL_UNSPECIFIED"
KERNEL_UNSPECIFIED: The kernel is unknown.
"PYTHON"
PYTHON: Python kernel.
"SCALA"
SCALA: Scala kernel.
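In the typed SDKs, the kernel is selected through the enum rather than a raw string (raw strings appear only in YAML). A minimal Python sketch with an arbitrary display name:

import pulumi_google_native as google_native

# Equivalent to kernel: SCALA in YAML.
jupyter_config = google_native.dataproc.v1.JupyterConfigArgs(
    display_name="Scala session",
    kernel=google_native.dataproc.v1.JupyterConfigKernel.SCALA)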

JupyterConfigResponse, JupyterConfigResponseArgs

DisplayName This property is required. string
Optional. Display name, shown in the Jupyter kernelspec card.
Kernel This property is required. string
Optional. Kernel.
DisplayName This property is required. string
Optional. Display name, shown in the Jupyter kernelspec card.
Kernel This property is required. string
Optional. Kernel.
displayName This property is required. String
Optional. Display name, shown in the Jupyter kernelspec card.
kernel This property is required. String
Optional. Kernel.
displayName This property is required. string
Optional. Display name, shown in the Jupyter kernelspec card.
kernel This property is required. string
Optional. Kernel.
display_name This property is required. str
Optional. Display name, shown in the Jupyter kernelspec card.
kernel This property is required. str
Optional. Kernel.
displayName This property is required. String
Optional. Display name, shown in the Jupyter kernelspec card.
kernel This property is required. String
Optional. Kernel.

PeripheralsConfig, PeripheralsConfigArgs

MetastoreService string
Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
SparkHistoryServerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfig
Optional. The Spark History Server configuration for the workload.
MetastoreService string
Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
SparkHistoryServerConfig SparkHistoryServerConfig
Optional. The Spark History Server configuration for the workload.
metastoreService String
Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
sparkHistoryServerConfig SparkHistoryServerConfig
Optional. The Spark History Server configuration for the workload.
metastoreService string
Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
sparkHistoryServerConfig SparkHistoryServerConfig
Optional. The Spark History Server configuration for the workload.
metastore_service str
Optional. Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[region]/services/[service_id]
spark_history_server_config SparkHistoryServerConfig
Optional. The Spark History Server configuration for the workload.
metastoreService String
Optional. Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[region]/services/[service_id]
sparkHistoryServerConfig Property Map
Optional. The Spark History Server configuration for the workload.
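
PeripheralsConfig is supplied through the template's environmentConfig. A sketch, assuming the @pulumi/google-native SDK; the project, service, and cluster names are placeholders.

import * as google_native from "@pulumi/google-native";

// Attach an existing Metastore service and a Spark History Server cluster.
const template = new google_native.dataproc.v1.SessionTemplate("with-peripherals", {
    location: "us-central1",
    environmentConfig: {
        peripheralsConfig: {
            metastoreService: "projects/my-project/locations/us-central1/services/my-metastore",
            sparkHistoryServerConfig: {
                dataprocCluster: "projects/my-project/regions/us-central1/clusters/history-server",
            },
        },
    },
});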

PeripheralsConfigResponse
, PeripheralsConfigResponseArgs

MetastoreService This property is required. string
Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
SparkHistoryServerConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigResponse
Optional. The Spark History Server configuration for the workload.
MetastoreService This property is required. string
Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
SparkHistoryServerConfig This property is required. SparkHistoryServerConfigResponse
Optional. The Spark History Server configuration for the workload.
metastoreService This property is required. String
Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
sparkHistoryServerConfig This property is required. SparkHistoryServerConfigResponse
Optional. The Spark History Server configuration for the workload.
metastoreService This property is required. string
Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
sparkHistoryServerConfig This property is required. SparkHistoryServerConfigResponse
Optional. The Spark History Server configuration for the workload.
metastore_service This property is required. str
Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
spark_history_server_config This property is required. SparkHistoryServerConfigResponse
Optional. The Spark History Server configuration for the workload.
metastoreService This property is required. String
Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
sparkHistoryServerConfig This property is required. Property Map
Optional. The Spark History Server configuration for the workload.

PyPiRepositoryConfig
, PyPiRepositoryConfigArgs

PypiRepository string
Optional. PyPI repository address.
PypiRepository string
Optional. PyPI repository address.
pypiRepository String
Optional. PyPI repository address.
pypiRepository string
Optional. PyPI repository address.
pypi_repository str
Optional. PyPI repository address.
pypiRepository String
Optional. PyPI repository address.
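
The shape holds a single repository address. A sketch of the input object on its own; the mirror URL is a hypothetical placeholder, and RepositoryConfig below shows where it plugs in.

// A private mirror exposing the standard "simple" index layout (PEP 503).
const pypiRepositoryConfig = {
    pypiRepository: "https://pypi.example.internal/simple/",
};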

PyPiRepositoryConfigResponse
, PyPiRepositoryConfigResponseArgs

PypiRepository This property is required. string
Optional. PyPI repository address.
PypiRepository This property is required. string
Optional. PyPI repository address.
pypiRepository This property is required. String
Optional. PyPI repository address.
pypiRepository This property is required. string
Optional. PyPI repository address.
pypi_repository This property is required. str
Optional. PyPI repository address.
pypiRepository This property is required. String
Optional. PyPI repository address.

RepositoryConfig
, RepositoryConfigArgs

PypiRepositoryConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.PyPiRepositoryConfig
Optional. Configuration for the PyPI repository.
PypiRepositoryConfig PyPiRepositoryConfig
Optional. Configuration for the PyPI repository.
pypiRepositoryConfig PyPiRepositoryConfig
Optional. Configuration for the PyPI repository.
pypiRepositoryConfig PyPiRepositoryConfig
Optional. Configuration for the PyPI repository.
pypi_repository_config PyPiRepositoryConfig
Optional. Configuration for the PyPI repository.
pypiRepositoryConfig Property Map
Optional. Configuration for the PyPI repository.
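
Nested end to end, the repository settings ride under runtimeConfig. A sketch, assuming the @pulumi/google-native SDK; the mirror URL and names are placeholders.

import * as google_native from "@pulumi/google-native";

// Route dependency installs for sessions through a private PyPI mirror.
const template = new google_native.dataproc.v1.SessionTemplate("with-repos", {
    location: "us-central1",
    runtimeConfig: {
        repositoryConfig: {
            pypiRepositoryConfig: {
                pypiRepository: "https://pypi.example.internal/simple/",
            },
        },
    },
});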

RepositoryConfigResponse
, RepositoryConfigResponseArgs

PypiRepositoryConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.PyPiRepositoryConfigResponse
Optional. Configuration for the PyPI repository.
PypiRepositoryConfig This property is required. PyPiRepositoryConfigResponse
Optional. Configuration for the PyPI repository.
pypiRepositoryConfig This property is required. PyPiRepositoryConfigResponse
Optional. Configuration for the PyPI repository.
pypiRepositoryConfig This property is required. PyPiRepositoryConfigResponse
Optional. Configuration for the PyPI repository.
pypi_repository_config This property is required. PyPiRepositoryConfigResponse
Optional. Configuration for the PyPI repository.
pypiRepositoryConfig This property is required. Property Map
Optional. Configuration for the PyPI repository.

RuntimeConfig
, RuntimeConfigArgs

ContainerImage string
Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
Properties Dictionary<string, string>
Optional. A mapping of property names to values, which are used to configure workload execution.
RepositoryConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.RepositoryConfig
Optional. Dependency repository configuration.
Version string
Optional. Version of the batch runtime.
ContainerImage string
Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
Properties map[string]string
Optional. A mapping of property names to values, which are used to configure workload execution.
RepositoryConfig RepositoryConfig
Optional. Dependency repository configuration.
Version string
Optional. Version of the batch runtime.
containerImage String
Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
properties Map<String,String>
Optional. A mapping of property names to values, which are used to configure workload execution.
repositoryConfig RepositoryConfig
Optional. Dependency repository configuration.
version String
Optional. Version of the batch runtime.
containerImage string
Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
properties {[key: string]: string}
Optional. A mapping of property names to values, which are used to configure workload execution.
repositoryConfig RepositoryConfig
Optional. Dependency repository configuration.
version string
Optional. Version of the batch runtime.
container_image str
Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
properties Mapping[str, str]
Optional. A mapping of property names to values, which are used to configure workload execution.
repository_config RepositoryConfig
Optional. Dependency repository configuration.
version str
Optional. Version of the batch runtime.
containerImage String
Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
properties Map<String>
Optional. A mapping of property names to values, which are used to configure workload execution.
repositoryConfig Property Map
Optional. Dependency repository configuration.
version String
Optional. Version of the batch runtime.
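
Putting the runtime knobs together: a sketch, assuming the @pulumi/google-native SDK; the version string, image path, and Spark properties are placeholders to adapt.

import * as google_native from "@pulumi/google-native";

// Pin a runtime version, use a custom image, and tune Spark via properties.
const template = new google_native.dataproc.v1.SessionTemplate("tuned-runtime", {
    location: "us-central1",
    runtimeConfig: {
        version: "2.1", // placeholder runtime version
        containerImage: "us-docker.pkg.dev/my-project/images/session:latest",
        properties: {
            "spark.executor.memory": "4g",
            "spark.dynamicAllocation.enabled": "true",
        },
    },
});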

RuntimeConfigResponse
, RuntimeConfigResponseArgs

ContainerImage This property is required. string
Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
Properties This property is required. Dictionary<string, string>
Optional. A mapping of property names to values, which are used to configure workload execution.
RepositoryConfig This property is required. Pulumi.GoogleNative.Dataproc.V1.Inputs.RepositoryConfigResponse
Optional. Dependency repository configuration.
Version This property is required. string
Optional. Version of the batch runtime.
ContainerImage This property is required. string
Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
Properties This property is required. map[string]string
Optional. A mapping of property names to values, which are used to configure workload execution.
RepositoryConfig This property is required. RepositoryConfigResponse
Optional. Dependency repository configuration.
Version This property is required. string
Optional. Version of the batch runtime.
containerImage This property is required. String
Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
properties This property is required. Map<String,String>
Optional. A mapping of property names to values, which are used to configure workload execution.
repositoryConfig This property is required. RepositoryConfigResponse
Optional. Dependency repository configuration.
version This property is required. String
Optional. Version of the batch runtime.
containerImage This property is required. string
Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
properties This property is required. {[key: string]: string}
Optional. A mapping of property names to values, which are used to configure workload execution.
repositoryConfig This property is required. RepositoryConfigResponse
Optional. Dependency repository configuration.
version This property is required. string
Optional. Version of the batch runtime.
container_image This property is required. str
Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
properties This property is required. Mapping[str, str]
Optional. A mapping of property names to values, which are used to configure workload execution.
repository_config This property is required. RepositoryConfigResponse
Optional. Dependency repository configuration.
version This property is required. str
Optional. Version of the batch runtime.
containerImage This property is required. String
Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
properties This property is required. Map<String>
Optional. A mapping of property names to values, which are used to configure workload execution.
repositoryConfig This property is required. Property Map
Optional. Dependency repository configuration.
version This property is required. String
Optional. Version of the batch runtime.
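
The Response variants describe what the provider reads back once the template exists, which is why every field is marked required here even when it was optional on input. Continuing the runtime sketch above, outputs lift through Pulumi as usual:

// Export the effective runtime version resolved by the service.
export const effectiveRuntimeVersion = template.runtimeConfig.version;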

SparkHistoryServerConfig
, SparkHistoryServerConfigArgs

DataprocCluster string
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
DataprocCluster string
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
dataprocCluster String
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
dataprocCluster string
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
dataproc_cluster str
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
dataprocCluster String
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
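
The value must be the full cluster resource name in the format shown. Building it from parts (all placeholders) in TypeScript helps avoid path typos:

const project = "my-project";
const region = "us-central1";
const clusterName = "history-server";
const dataprocCluster = `projects/${project}/regions/${region}/clusters/${clusterName}`;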

SparkHistoryServerConfigResponse
, SparkHistoryServerConfigResponseArgs

DataprocCluster This property is required. string
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
DataprocCluster This property is required. string
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
dataprocCluster This property is required. String
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
dataprocCluster This property is required. string
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
dataproc_cluster This property is required. str
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
dataprocCluster This property is required. String
Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]

Package Details

Repository
Google Cloud Native pulumi/pulumi-google-native
License
Apache-2.0
