
The Developer’s Guide To Palo Alto Networks Cloud NGFW for AWS

By: Migara Ekanayake

Busy modernizing your applications? One thing you can’t cut corners on is security. Today, we will discuss network security: inserting inbound, outbound, and VPC-to-VPC security for your traffic flows, to be precise, without compromising DevOps speed and agility. When it comes to network security for cloud-native applications, it is challenging to find a solution that provides best-in-class NGFW security while letting you consume that security as a cloud-native service. As a result, developers often have to compromise on security to find a solution that fits their development needs. That’s no longer the case; today, we will look at how you can have your cake and eat it too!

Infrastructure-as-Code is one of the key pillars of the application modernization journey, and there is a wide range of tools to choose from. Terraform is one of the industry’s most widely adopted infrastructure-as-code tools for shifting from manual, error-prone provisioning to automated provisioning at scale. We firmly believe it is crucial to be able to provision and manage your cloud-native security using Terraform, next to your application code where it belongs. That is why we decided to provide launch-day Terraform support for Palo Alto Networks Cloud NGFW for AWS with our brand new cloudngfwaws Terraform provider, allowing you to perform day-0, day-1, and day-2 tasks. You can now consume Cloud NGFW with the tooling you are already using, without leaving the interfaces you are familiar with; it’s that simple!

Getting Started


AWS Architecture

We will focus on securing an architecture similar to the topology below. Note the unused Firewall Subnet — later, we will deploy the Cloud NGFW endpoints into this subnet and make the necessary routing changes to inspect traffic through the Cloud NGFW.

Application Architecture

Authentication and Authorization

Enable Programmatic Access

To use the Terraform provider, you must first enable Programmatic Access for your Cloud NGFW tenant. You can check this by navigating to the Settings section of the Cloud NGFW console. The steps to do this can be found here.

You will authenticate against your Cloud NGFW by assuming roles in your AWS account that are allowed to make API calls to the AWS API Gateway service. The associated tags with the roles dictate the type of Cloud NGFW programmatic access granted — Firewall Admin, RuleStack Admin, or Global Rulestack Admin.

The following Terraform configuration will create an AWS role which we will utilize later when setting up the cloudngfwaws Terraform provider.
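A minimal sketch of such a role follows. The role name and account ID are illustrative, and the exact tag keys (assumed here to be CloudNGFWFirewallAdmin and CloudNGFWRulestackAdmin) should be verified against the Cloud NGFW documentation for your tenant:

```hcl
data "aws_caller_identity" "current" {}

resource "aws_iam_role" "ngfw_programmatic_access" {
  name = "CloudNGFWProgrammaticAccess" # illustrative name

  # Trust principals in your own AWS account to assume this role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "sts:AssumeRole"
      Principal = {
        AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
      }
    }]
  })

  # The tags associated with the role dictate the type of Cloud NGFW
  # programmatic access granted (tag keys are an assumption; check the docs).
  tags = {
    CloudNGFWFirewallAdmin  = "Yes"
    CloudNGFWRulestackAdmin = "Yes"
  }
}
```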

Setting Up The Terraform Provider

In this step, we will configure the Terraform provider by specifying the ARN of the role we created in the previous step. Alternatively, you can specify individual Cloud NGFW programmatic access roles via the lfa-arn, lra-arn, and gra-arn parameters.
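A provider configuration along these lines is a reasonable sketch; the role ARN and region are placeholders, and argument names should be checked against the cloudngfwaws provider documentation for the version you pin:

```hcl
terraform {
  required_providers {
    cloudngfwaws = {
      source  = "PaloAltoNetworks/cloudngfwaws"
      version = "~> 1.0"
    }
  }
}

provider "cloudngfwaws" {
  region = "us-east-1"

  # Single role with both Firewall Admin and Rulestack Admin tags
  # (placeholder ARN; use the role created earlier).
  arn = "arn:aws:iam::123456789012:role/CloudNGFWProgrammaticAccess"
}
```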

Note how the Terraform provider documentation specifies the Admin Permission Type required for each Terraform resource: Firewall, Rulestack, or Global Rulestack. You must ensure the Terraform provider is configured with AWS role(s) that have sufficient permissions to use the Terraform resources in your configuration file.

Rulestacks and Cloud NGFW Resources

There are two fundamental constructs you will discover throughout the rest of this article — Rulestacks and Cloud NGFW resources.

A rulestack defines the NGFW traffic filtering behavior, including advanced access control and threat prevention — simply a set of security rules and their associated objects and security profiles.

Cloud NGFW resources are managed resources that provide NGFW capabilities with built-in resilience, scalability, and life-cycle management. You will associate a rulestack to an NGFW resource when you create one.

Deploying Your First Cloud NGFW Rulestack

First, let’s start by creating a simple rulestack that uses the BestPractice Anti-Spyware profile. BestPractice profiles are built-in security profiles that make it easier for you to apply strong security from the start. If required, you can also create custom profiles to meet your needs.
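A sketch of such a rulestack is shown below. The rulestack name, account ID, and the profile_config block layout are assumptions to be checked against the cloudngfwaws provider documentation:

```hcl
resource "cloudngfwaws_rulestack" "demo" {
  name        = "demo-rulestack" # illustrative name
  scope       = "Local"
  account_id  = "123456789012"   # placeholder AWS account ID
  description = "Rulestack using the built-in BestPractice Anti-Spyware profile"

  # Attach the built-in BestPractice Anti-Spyware security profile.
  profile_config {
    anti_spyware = "BestPractice"
  }
}
```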

The next step is to create a security rule that only allows HTTP-based traffic and associate it with the rulestack we created in the previous step. Note that we use the App-ID web-browsing instead of traditional port-based enforcement.
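A hedged sketch of such a rule follows, assuming a rulestack named demo-rulestack already exists; the argument names (rule_list, protocol, and so on) should be confirmed against the cloudngfwaws_security_rule documentation:

```hcl
resource "cloudngfwaws_security_rule" "allow_web" {
  rulestack   = "demo-rulestack" # name of the rulestack created earlier
  rule_list   = "LocalRule"
  priority    = 100
  name        = "allow-web-browsing"
  description = "Allow HTTP-based traffic using App-ID, not port numbers"

  source {
    cidrs = ["any"]
  }
  destination {
    cidrs = ["any"]
  }

  # App-ID based enforcement instead of traditional port-based rules.
  applications = ["web-browsing"]
  protocol     = "application-default"
  action       = "Allow"
  logging      = true
}
```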

Committing Your Rulestack

Once the rulestack is created, we will commit the rulestack before assigning it to an NGFW resource.

Note: cloudngfwaws_commit_rulestack should be placed in a separate plan from the plan that configures the rulestack and its contents. Otherwise, you will have perpetual configuration drift and will need to run your plan twice for the commit to be performed.
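The commit itself is a small resource; a sketch (assuming the rulestack from the earlier step is named demo-rulestack, and kept in its own plan as the note above advises):

```hcl
# Commits the named rulestack so it can be attached to an NGFW resource.
resource "cloudngfwaws_commit_rulestack" "demo" {
  rulestack = "demo-rulestack"
}
```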

Deploying Your First Cloud NGFW Resource

Traffic to and from the resources in your VPC subnets is routed to NGFW resources through NGFW endpoints. How these NGFW endpoints are created is determined by the endpoint mode you select when creating the Cloud NGFW resource.

  • ServiceManaged: creates NGFW endpoints in the VPC subnets you specify.
  • CustomerManaged: creates just the NGFW endpoint service in your AWS account; you then have the flexibility to create NGFW endpoints in the VPC subnets you want later.

In this example, we are going to choose the ServiceManaged endpoint mode. Also, notice how we have specified the subnet_mapping property: these are the subnets that contain the AWS resources you want to protect.
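Putting that together, an NGFW resource might look like the sketch below. The VPC, subnet, and account identifiers are placeholders, and the exact attribute names (in particular endpoint_mode and subnet_mapping) should be verified against the cloudngfwaws_ngfw documentation:

```hcl
resource "cloudngfwaws_ngfw" "demo" {
  name          = "demo-ngfw"
  vpc_id        = "vpc-0123456789abcdef0" # placeholder VPC ID
  account_id    = "123456789012"          # placeholder AWS account ID
  endpoint_mode = "ServiceManaged"

  # Associate the previously created and committed rulestack.
  rulestack = "demo-rulestack"

  # Subnets where the AWS resources you want to protect live.
  subnet_mapping {
    subnet_id = "subnet-0123456789abcdef0" # placeholder subnet ID
  }
}
```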

In production, you may want to organize these Terraform resources into multiple stages of your pipeline — first, create the rulestack and its content, and proceed to the stage where you will commit the rulestack and create the NGFW resource.

At this point, you will have a Cloud NGFW endpoint deployed into your Firewall subnet.

You can retrieve the NGFW endpoint ID to Firewall Subnet mapping via the cloudngfwaws_ngfw Terraform data source. This information is required during route creation in the next step.
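A sketch of that lookup, assuming the NGFW resource is named demo-ngfw; the exported attribute layout is an assumption, so consult the data source schema before relying on a specific path:

```hcl
data "cloudngfwaws_ngfw" "demo" {
  name       = "demo-ngfw"
  account_id = "123456789012" # placeholder AWS account ID
}

output "ngfw_endpoint_status" {
  # Contains the endpoint ID to subnet mapping needed for route creation
  # (exact attribute path depends on the provider version).
  value = data.cloudngfwaws_ngfw.demo.status
}
```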

Routing Traffic via Cloud NGFW

The final step is to add or update routes in your existing AWS route tables to send traffic via the Cloud NGFW. The new routes are highlighted in the diagram below. Again, you can perform this via the aws_route or aws_route_table Terraform resources.
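As a hedged sketch, the routes can point at the NGFW endpoint via the vpc_endpoint_id argument of aws_route. All route table IDs, the endpoint ID, and the CIDR are placeholders for your own topology:

```hcl
# Send outbound traffic from the application subnet through the NGFW endpoint.
resource "aws_route" "app_to_ngfw" {
  route_table_id         = "rtb-0123456789abcdef0" # app subnet route table
  destination_cidr_block = "0.0.0.0/0"
  vpc_endpoint_id        = "vpce-0123456789abcdef0" # Cloud NGFW endpoint
}

# Return inbound traffic to the application subnet via the NGFW endpoint
# (placed in an edge route table associated with the Internet Gateway).
resource "aws_route" "igw_to_ngfw" {
  route_table_id         = "rtb-0fedcba9876543210" # IGW edge route table
  destination_cidr_block = "10.0.1.0/24"           # app subnet CIDR
  vpc_endpoint_id        = "vpce-0123456789abcdef0"
}
```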

Learn more about Cloud NGFW

In this article, we discovered how to deploy Cloud NGFW in the Distributed model. You can also deploy Cloud NGFW in a Centralized model with AWS Transit Gateway. The Centralized model will allow you to run Cloud NGFW in a centralized “inspection” VPC and connect all your other VPCs via Transit Gateway.

We also discovered how to move away from traditional port-based policy enforcement and move towards application-based enforcement. You can find a comprehensive list of available App-IDs here.

There is more you can do with Cloud NGFW.

  • Threat prevention — Automatically stop known malware, vulnerability exploits, and command and control infrastructure (C2) hacking with industry-leading threat prevention.
  • Advanced URL Filtering — Stop unknown web-based attacks in real-time to prevent patient zero. Advanced URL Filtering analyzes web traffic, categorizes URLs, and blocks malicious threats in seconds.

Cloud NGFW for AWS is a regional service. Currently, it is available in the AWS regions enumerated here. To learn more, visit the documentation and FAQ pages. To get hands-on experience with this, please subscribe via the AWS Marketplace page.

The Developer’s Guide To Palo Alto Networks Cloud NGFW for AWS was originally published in Palo Alto Networks Developers on Medium, where people are continuing the conversation by highlighting and responding to this story.

Dynamic Firewalling with Palo Alto Networks NGFWs and Consul-Terraform-Sync

By: Migara Ekanayake

I cannot reach…!! Must be the firewall!!

Sounds familiar? We’ve all been there. If you were that lonely firewall administrator who tried to defend the good ol’ firewall, congratulations on making it this far. Life was tough back then, when you only had a handful of firewall change requests a day, static data centres, and monolithic applications.

Fast forward to the present day: we are moving away from traditional static data centres to modern dynamic ones. Applications are no longer maintained as monoliths; they are arranged into several smaller functional components (aka microservices) to gain development agility. But as you gain development agility, you introduce new operational challenges.

What are the Operational Challenges?

Now that you have split your monolithic application into a dozen microservices, you will most likely run multiple instances of each microservice to fully realise the benefits of this app transformation exercise. Every time you want to bring up a new instance, you open a ticket and wait for the firewall administrator to allow traffic to the new node; this could take days, if not weeks.


When you add autoscaling into the mix (so that the number of instances can dynamically scale in and out based on your traffic demands), having to wait days or weeks before traffic can reach those new instances defeats the whole point of having autoscaling in the first place.

Long live agility!

The Disjoint

Traditionally, firewall administrators retrieved requests from their ticketing system and implemented the changes via the UI or CLI during a maintenance window. This creates an impedance mismatch with the application teams, slows the overall delivery of solutions, and can introduce human error both during ticket creation and during the implementation of the request. If you are a network/security administrator who has recognised these problems, the likelihood is that you have already written some custom scripts and/or leveraged a configuration management platform to automate some of these tasks. Yes, this solves the problem to a certain extent; still, there is a manual handoff between Continuous Delivery and Continuous Deployment.

The DevOps Shortfall


Network and security teams can solve these challenges by enabling dynamic service-driven network automation with self-service capabilities using an automation tool that supports multiple networking technologies.

HashiCorp and Palo Alto Networks recently collaborated on a strategy for this using HashiCorp’s Network Infrastructure Automation (NIA). This works by triggering a Terraform workflow that automatically updates Palo Alto Networks NGFWs or Panorama based on changes it detects from the Consul catalog.

Under the hood, we are leveraging Dynamic Address Groups (DAGs). In PAN-OS, it is possible to dynamically associate (and remove) tags from IP addresses in several ways, including the XML API. A tag is simply a string that can be used as match criteria in Dynamic Address Groups, allowing the firewall to dynamically allow or block traffic without requiring a configuration commit.
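As an illustration, a Dynamic Address Group can be declared with the panos Terraform provider; the group and tag names here (web-tier) are assumptions for this example:

```hcl
resource "panos_address_group" "web_tier" {
  name        = "dag-web-tier"
  description = "Membership driven by IP-to-tag registration, no commit needed"

  # Match criteria: any IP address currently registered with the 'web-tier'
  # tag becomes a member of this group; membership changes take effect
  # without a configuration commit.
  dynamic_match = "'web-tier'"
}
```

A security rule that references this address group then automatically follows instances as Consul-Terraform-Sync registers and deregisters their IPs.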

If you need a refresher on DAGs here is a great DAG Quickstart guide.

Scaling Up with Panorama

The challenge for large-scale networks is ensuring every firewall that enforces policies has the IP address-to-tag mappings for your entire user base.

If you are managing your Palo Alto Networks NGFWs with Panorama you can redistribute IP address-to-tag mappings to your entire firewall estate within a matter of seconds. This could be your VM-Series NGFWs deployed in the public cloud, private cloud, hybrid cloud or hardware NGFWs in your datacentre.

IP address-to-tag mappings redistribution

What’s Next?

If you have read this far, why not give this step-by-step guide on automating the configuration management process for Palo Alto Networks NGFWs with Consul-Terraform-Sync a try?

For more resources, check out:

Dynamic Firewalling with Palo Alto Networks NGFWs and Consul-Terraform-Sync was originally published in Palo Alto Networks Developers on Medium, where people are continuing the conversation by highlighting and responding to this story.

and the Rise of Docs-as-Code

By: Steven Serrata

It started out as a simple idea: to improve the documentation experience for the many open-source projects at Palo Alto Networks. Up until that point (and still to this day) our open-source projects were documented in the traditional fashion: the ubiquitous and the occasional Read the Docs or GitHub Pages site. In truth, there is nothing inherently “wrong” with this approach, but there was a sense that we could do more to improve the developer experience of adopting our APIs, tools, and SDKs. So began our exploration into how companies like Google, Microsoft, Facebook, Amazon, Twilio, Stripe, et al. approach developer docs. Outside of visual aesthetics, it was clear that these companies invested a great deal of time in streamlining their developer onboarding experiences, something we like to refer to as “time-to-joy.” The hard truth was that our product documentation site catered to an altogether different audience, and it was evident in the fact that features like quick starts (“Hello World”), code blocks, and interactive API reference docs were noticeably absent.

Great Gatsby!

Circa March 2019. Although we technically weren’t new to Developer Relations as a practice, it was the first time we made an honest attempt to solve for the lack of developer-style documentation at Palo Alto Networks. We quickly got to work researching how other companies delivered their developer sites and received a tip from a colleague to look at Gatsby, a static-site generator based on GraphQL and ReactJS. After some tinkering, we had our first git-backed developer portal deployed using AWS Amplify, but that was only the beginning. It was a revelation that git-backed, web/CDN hosting services existed and that rich, interactive documentation sites could be managed like code. It wasn’t long before we found Netlify, Firebase, and, eventually, Docusaurus, a static-site generator specializing in the documentation use case. It was easy to pick up and run with this toolchain, as it seamlessly weaved together the open-source git flows we were accustomed to with the ability to rapidly author developer docs using markdown and MDX, a veritable match made in heaven. We had the stack; now all we needed was a strategy.

For Developers, by Developers

On the surface, is a family of sites dedicated to documentation for developers, by developers. At a deeper level, it’s a fresh, bold new direction for a disruptive, next-generation security company that, through strategic acquisitions and innovation, has found itself on the doorstep of the API Economy (more on that topic coming soon!).

The content published on is for developers because it supports and includes the features that developer audiences have come to expect, such as code blocks and API reference docs (and dark mode!):

Code blocks on

It’s by developers since the contributing, review and publishing flows are backed by git and version control systems like GitLab and GitHub. What’s more, by adopting git as the underlying collaboration tool, is capable of supporting both closed and open-source contributions. Spot a typo or inaccuracy? Open a GitHub issue. Got a cool pan-os-python tutorial you’d like to contribute? Follow our contributing guidelines and we’ll start reviewing your submission. By adopting developer workflows, is able to move developer docs closer to the source (no pun intended). That means we can deliver high-quality content straight from the minds of the leading automation, scripting, DevOps/SecOps practitioners in the cybersecurity world.

Markdown demo

So, What’s Next?

If we’ve learned anything over the years, it’s that coming up with technical solutions might actually be the easier aspect of working on the project. What can be more difficult (and arguably more critical) is garnering buy-in from stakeholders, partners, and leadership, while adapting to the ever-evolving needs of the developer audience we’re trying to serve. All this while doing what we can to ensure the reliability and scalability of the underlying infrastructure (not to mention SEO, analytics, accessibility, performance, i18n, etc.). The real work has only recently begun, and we’re ecstatic to see what the future holds. Already, we’ve seen a meteoric rise in monthly active users (MAUs) across sites (over 40K!), and is well-positioned to be the de facto platform for delivering developer documentation at Palo Alto Networks.

To learn more and experience it for yourself, feel free to tour our family of developer sites and peruse the GitHub Gallery of open-source projects at Palo Alto Networks.

For a deeper dive into the toolchain/stack, stay tuned for part 2!

and the Rise of Docs-as-Code was originally published in Palo Alto Networks Developers on Medium, where people are continuing the conversation by highlighting and responding to this story.