# Palo Alto Networks Network Load Balancer Module for AWS
A Terraform module for deploying a Network Load Balancer in the AWS cloud. It can be used either as a public-facing Load Balancer (to distribute incoming traffic across Firewalls) or as an internal Load Balancer (to distribute traffic from Firewalls to the actual application).
## Usage
For example usage, please refer to the Centralized Design, Combined Design, or Isolated Design examples.
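A minimal public-facing invocation is sketched below. It assumes the module lives under `modules/nlb` in this repository and that the `vmseries`, `subnet_sets` and VPC modules referenced in the input descriptions further down are present in the calling configuration; all names, output names, IDs and the bucket name are illustrative placeholders, not values produced by this module.

```hcl
# A sketch of a public-facing NLB balancing HTTPS traffic to the firewalls'
# untrust interfaces. Paths, names and the bucket name are placeholders.
module "public_nlb" {
  source = "../../modules/nlb" # assumed path within this repository

  name        = "public-nlb"
  internal_lb = false
  vpc_id      = module.security_vpc.id # assumed VPC module output name

  # One subnet per availability zone, taken from the "untrust" subnet set.
  subnets = { for k, v in module.subnet_sets["untrust"].subnets : k => { id = v.id } }

  balance_rules = {
    "HTTPS-APP" = {
      protocol          = "TCP"
      port              = "443"
      health_check_port = "80"
      target_port       = 8443
      target_type       = "ip"
      # Private IPs of the firewalls' untrust interfaces (see the
      # balance_rules input description below).
      targets = { for k, v in var.vmseries : k => module.vmseries[k].interfaces["untrust"].private_ip }
    }
  }

  # Optionally ship access logs to a newly created S3 bucket.
  configure_access_logs      = true
  access_logs_s3_bucket_name = "example-nlb-access-logs" # placeholder bucket name

  tags = {
    Managed-By = "Terraform"
  }
}
```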
## Reference
### Requirements
Name | Version |
---|---|
terraform | >= 1.0.0, < 2.0.0 |
aws | ~> 5.17 |
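For reference, the constraints above can be expressed in the calling root module with a standard version-pinning block; a minimal sketch (keeping it in a `versions.tf` file is just a convention):

```hcl
terraform {
  # Terraform core version required by this module.
  required_version = ">= 1.0.0, < 2.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.17"
    }
  }
}
```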
### Providers
Name | Version |
---|---|
aws | ~> 5.17 |
### Modules
No modules.
### Resources
Name | Type |
---|---|
aws_eip.this | resource |
aws_lb.this | resource |
aws_lb_listener.this | resource |
aws_lb_target_group.this | resource |
aws_lb_target_group_attachment.this | resource |
aws_s3_bucket.this | resource |
aws_s3_bucket_acl.this | resource |
aws_s3_bucket_policy.this | resource |
aws_s3_bucket_public_access_block.this | resource |
aws_s3_bucket_server_side_encryption_configuration.this | resource |
aws_s3_bucket_versioning.this | resource |
aws_elb_service_account.this | data source |
aws_iam_policy_document.this | data source |
aws_partition.this | data source |
aws_s3_bucket.this | data source |
### Inputs
Name | Description | Type | Default | Required |
---|---|---|---|---|
access_logs_byob | Bring Your Own Bucket - use this if you would like to re-use an existing S3 Bucket for the Load Balancer's access logs. NOTICE: this code does not set up proper Bucket Policies for existing buckets; they have to already be in place. | bool | false | no |
access_logs_s3_bucket_name | Name of an S3 Bucket that will be used as storage for the Load Balancer's access logs. When used with configure_access_logs it becomes the name of a newly created S3 Bucket. When used with access_logs_byob it is the name of an existing bucket. | string | "pantf-alb-access-logs-bucket" | no |
access_logs_s3_bucket_prefix | A path to a location inside the bucket under which access logs will be stored. When omitted, it defaults to the root folder of the bucket. | string | null | no |
balance_rules | An object that contains the listener, target group, and health check configuration. It consists of maps of applications, as follows: balance_rules = { "application_name" = { protocol = "communication protocol; since this is an NLB module, accepted values are TCP or TLS" port = "communication port" target_type = "type of the target that will be attached to a target group; no defaults here, it has to be provided explicitly (regardless of the defaults Terraform could accept)" target_port = "for target types supporting port values, the port number on which the target accepts communication; defaults to the communication port value" targets = "a map of targets, where the key is the target name (used to create a name for the target attachment) and the value is the target ID (IP, resource ID, etc - the actual value depends on the target type)" target_az = "this parameter is not supported if the target type of the target group is instance or alb; if the target type is ip and the IP address is outside the VPC, this parameter is required" health_check_port = "port used by the target group health check; if omitted, traffic-port will be used" threshold = "number of consecutive health checks before considering a target healthy or unhealthy; defaults to 3" interval = "time between each health check, between 5 and 300 seconds; defaults to 30s" certificate_arn = "(TLS ONLY) the ARN of a certificate" alpn_policy = "(TLS ONLY) ALPN policy name; for possible values check the [Terraform documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lb_listener#alpn_policy); defaults to None" } } The application_name key may contain only letters, numbers and a dash (-) - that's an AWS limitation. protocol and port are used for the listener, target group and target group attachment, and partially also for health checks (see below). All listeners always use the forward action. If you add FWs as targets, make sure you use target_type = "ip" and you provide the correct FW IPs in the targets map. IPs should be from the subnet set that the Load Balancer was created in. An example of how to feed this variable with data: fw_instance_ips = { for k, v in var.vmseries : k => module.vmseries[k].interfaces["untrust"].private_ip } For the format of var.vmseries check the [vmseries module](../vmseries/README.md). The key is the VM name. By using those keys, we can loop through all vmseries modules and take the private IP from the interface that is assigned to the subnet we require. The subnet can be identified by the subnet set name (like above). In other words, the for loop returns the following map: { vm01 = "1.1.1.1" vm02 = "2.2.2.2" ... } Health checks are of type TCP by default. The reason is that HTTP requests might flow through the FW to the actual application, so instead of checking the status of the FW we might check the status of the application. You have the option to specify a health check port. This way you can set up a Management Profile with an Administrative Management Service limited only to the NLB's private IPs and use a port for that service as the health check port. This way you make sure the actual health check is separated from the application rule's port. EXAMPLE: balance_rules = { "HTTPS-APP" = { protocol = "TCP" port = "443" health_check_port = "80" threshold = 2 interval = 10 target_port = 8443 target_type = "ip" targets = { for k, v in var.vmseries : k => module.vmseries[k].interfaces["untrust"].private_ip } target_az = "all" stickiness = true } } | any | n/a | yes |
configure_access_logs | Configure the Load Balancer to store access logs in an S3 Bucket. When used with access_logs_byob set to false it forces creation of a new bucket. If, however, access_logs_byob is set to true an existing bucket can be used. The name of the newly created or existing bucket is controlled via access_logs_s3_bucket_name. | bool | false | no |
create_dedicated_eips | If set to true, a set of EIPs will be created for each zone/subnet. Otherwise, AWS will handle IP management. | bool | false | no |
enable_cross_zone_load_balancing | Enable load balancing between instances in different AZs. Defaults to true. Change to false only if absolutely necessary. By default, there is only one FW in each AZ; turning this off means a 1:1 correlation between a public IP assigned to an AZ and the FW deployed in that AZ. | bool | true | no |
internal_lb | Determines if this Load Balancer will be a public (default) or an internal one. | bool | false | no |
name | Name of the Load Balancer to be created; must be at most 32 characters long. | string | n/a | yes |
security_groups | A list of security group IDs to use with the Load Balancer. If security groups were created with the VPC module, you can use that module's output like this: security_groups = [module.vpc.security_group_ids["load_balancer_security_group"]]. For more information on the load_balancer_security_group key refer to the VPC module documentation. | list(string) | [] | no |
subnets | Map of subnets used with a Network Load Balancer. Each key is an availability zone name and each value is an object with an id attribute identifying the AWS subnet. Examples: you can define the values directly: subnets = { "us-east-1a" = { id = "snet-123007" } "us-east-1b" = { id = "snet-123008" } } You can also use the output of the subnet_sets module: subnets = { for k, v in module.subnet_sets["untrust"].subnets : k => { id = v.id } } | map(object({ id = string })) | n/a | yes |
tags | Map of AWS tags to apply to all the created resources. | map(string) | {} | no |
vpc_id | ID of the security VPC the Load Balancer should be created in. | string | n/a | yes |
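To complement the public-facing sketch in the Usage section, the sketch below shows the internal variant described under internal_lb, balancing traffic leaving the firewalls towards an application. All IDs and IP addresses are illustrative placeholders.

```hcl
# A sketch of an internal NLB forwarding traffic from the firewalls to
# application backends; subnet IDs and target IPs are placeholders.
module "app_nlb" {
  source = "../../modules/nlb" # assumed path within this repository

  name        = "app-internal-nlb"
  internal_lb = true
  vpc_id      = "vpc-0a1b2c3d4e5f67890"

  subnets = {
    "us-east-1a" = { id = "subnet-0aaa1111222233334" }
    "us-east-1b" = { id = "subnet-0bbb1111222233335" }
  }

  balance_rules = {
    "HTTP-APP" = {
      protocol    = "TCP"
      port        = "80"
      target_type = "ip"
      targets = {
        "app01" = "10.104.0.10"
        "app02" = "10.104.64.10"
      }
      # Per the balance_rules description, target_az would additionally be
      # required for IP targets located outside the VPC.
    }
  }

  tags = {
    Managed-By = "Terraform"
  }
}
```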
### Outputs
Name | Description |
---|---|
lb_fqdn | The FQDN of the Load Balancer. |
target_group | n/a |
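The outputs can be consumed like any other module outputs. A brief sketch, assuming the hypothetical public_nlb module instance from the Usage example and a pre-existing Route 53 hosted zone (the zone ID and record name are placeholders):

```hcl
# Expose the balancer's FQDN from the root module.
output "application_fqdn" {
  value = module.public_nlb.lb_fqdn
}

# Optionally point a DNS name at the balancer.
resource "aws_route53_record" "app" {
  zone_id = "Z0123456789EXAMPLE" # placeholder hosted zone ID
  name    = "app.example.com"
  type    = "CNAME"
  ttl     = 300
  records = [module.public_nlb.lb_fqdn]
}
```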