Launching AWS Payment Cryptography

AWS announced the launch of AWS Payment Cryptography this week, and I’ve had the opportunity to lead the service from its earliest definition through production launch. The official AWS announcement is here: https://aws.amazon.com/about-aws/whats-new/2023/06/aws-payment-cryptography/.

This was one of those projects where the hard part was never just building software. The challenge was defining a service that could meet the expectations of payment processors, issuers, and financial institutions who were used to a vastly different interaction model while operating inside the security, compliance, and operational standards required for payment cryptography.

My role started at the beginning: taking early customer input, writing the initial business requirements, and helping shape the architecture that would eventually become the service. That meant defining the threat model, establishing the security posture, and making early decisions around control-plane boundaries, data-plane design, hardware integration, and how HSM-backed infrastructure would operate inside AWS.

I also led the evaluation and selection of the HSM platform itself. That work involved deep vendor evaluation of the big three payment HSM vendors, prototype testing, operational modeling, and understanding what would actually work for a managed cloud service rather than simply replicating traditional on-premises approaches. 

As the service moved toward launch, a major focus became operational discipline. Observability, operational reviews, and HSM fleet health management were critical to making sure the system would hold up under real customer use, not just pass a design review. Several of the hardware-backed design patterns established during this work are already proving useful beyond this single service.

Launching AWS Payment Cryptography has been one of the most meaningful things I’ve worked on. It was a rare opportunity to help build something from ambiguity to durable production, where architecture, security, and execution all had to hold together at the same time.

EBS Key Rotation Strategies for KMS Master Keys

The AWS Key Management Service (KMS) provides a capability to manage encryption keys with transparent integration into many other AWS services. Of particular significance is transparent data-at-rest encryption for the AWS Elastic Block Store (EBS) service. When using KMS encryption, the data stream to and from the underlying storage medium is encrypted and decrypted at the hypervisor level. KMS is a regional, AWS account-bound service leveraging software key generation and underlying Hardware Security Module (HSM) appliances for encrypted storage of key material. Keys are decrypted as needed, held in memory for the duration of the operation and then immediately erased from memory. The end user has the ability to apply controls over access to the keys used for customer data encryption. As part of a security management plan, the customer may desire, or be required, to implement a key rotation strategy: keys may require rotation by policy with a defined validity period, or could require rotation due to compromise. AWS encourages customers to encrypt all data both in motion and at rest. This protects customer privacy while providing an ability to crypto-erase data by deleting its encryption key, a useful mitigation for a data spill or a need to enforce the protection of deleted data.

Envelope Encryption

KMS uses an envelope encryption scheme that provides two layers of protection for customer data and enables access controls over the encrypted customer data itself. A data key is stored alongside the customer data that it encrypts, forming an envelope containing the key and the ciphertext. The data key is encrypted with a Customer Master Key (CMK), which has management facilities provided via the AWS Console, CLI and SDKs. In order to decrypt a dataset, an end user must have access to the CMK used to encrypt the data key, and each use of the CMK leaves an audit trail. Without the CMK, the data key cannot be decrypted and hence the underlying customer data cannot be decrypted. The CMK is itself encrypted by an AWS master key stored under third-party-audited physical security controls, so that CMKs could be recovered from the underlying HSMs if a full region outage were ever to occur and the in-memory copies were lost. There are two different types of CMKs supported by KMS, with the difference being key material origin: an AWS-generated CMK, the “AWS managed CMK”, and a customer-imported external CMK, the “customer managed CMK”.
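
To make the envelope concrete, a minimal sketch using the AWS SDK for .NET might look like the following (the CMK alias and the local AES usage are illustrative assumptions, not the internal mechanism EBS uses):

using System;
using System.IO;
using System.Security.Cryptography;
using Amazon.KeyManagementService;
using Amazon.KeyManagementService.Model;

var kms = new AmazonKeyManagementServiceClient();

// Ask KMS for a data key under a CMK: the plaintext copy encrypts data locally
// and is then discarded; the encrypted copy is stored alongside the ciphertext
// to form the envelope.
var dataKey = await kms.GenerateDataKeyAsync(new GenerateDataKeyRequest
{
    KeyId = "alias/example-ebs-cmk",   // hypothetical CMK alias
    KeySpec = DataKeySpec.AES_256
});

using var aes = Aes.Create();
aes.Key = dataKey.Plaintext.ToArray();                      // used to encrypt the data
byte[] wrappedDataKey = dataKey.CiphertextBlob.ToArray();   // stored with the data

// Later, only a caller allowed to use the CMK can unwrap the data key.
var unwrapped = await kms.DecryptAsync(new DecryptRequest
{
    CiphertextBlob = new MemoryStream(wrappedDataKey)
});
Console.WriteLine($"Recovered a {unwrapped.Plaintext.Length}-byte data key");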

EBS Service, Snapshots and AMIs

The EBS service provides block storage in the form of mountable volumes for Elastic Compute Cloud (EC2) instances. These volumes are similar to logical volumes presented to a virtual machine from underlying SAN storage. An EBS volume backup can be taken using the Snapshot feature, which copies the blocks that have changed since the prior snapshot point, offering an ability to capture the point-in-time contents of a volume. Snapshots are taken from an EBS volume and can be used to create new copies of that volume with identical data as of a given point in time. As such, they serve as the primary backup mechanism for EC2 instances as part of a disaster recovery plan. Snapshots, however, do not include instance configuration, as they represent only a single volume and not the machine to which the source volume was attached. For this purpose, AWS offers an instance imaging feature which captures the instance’s current configuration and creates snapshots of all volumes attached to that instance, producing what’s called an Amazon Machine Image (AMI). AMIs and snapshots are stored for you in an AWS-managed S3 construct, making both regional service components. A copy feature is provided for both an AMI (really just an abstraction over the snapshots tied to the AMI) and an individual snapshot, and the copy allows encryption to be enabled and a master key to be selected. This selection causes KMS to generate a data key for each of the specific snapshots in question, which is used to encrypt the customer data.

KMS AWS Managed CMK Behaviors

An AWS managed CMK is generated by and populated with key material through the software constructs of the KMS service. A default AWS managed key is automatically generated at the point of AWS account creation for each service that offers native KMS integration across each AWS region. AWS managed CMKs offer an automated rotation feature, when enabled, with a 1-year life cycle. That rotation process would cause the generation of new key material and the archival of old key material within that CMK. This old, deprecated key material is maintained to facilitate decryption of data keys previously encrypted with that key material. The new key material is then used for all encryption operations on customer data for new data sets going forward. In this process, the data keys for customer data are not rotated and existing data keys are not decrypted and re-encrypted with the new key material.

KMS Customer Managed CMK Behaviors

A customer managed CMK is generated as an empty container to which a customer can import a 256-bit symmetric encryption key. The advantage here is that the customer can generate a key either in software or through their own hardware key management tools. The customer must maintain a copy of that key within their own IT infrastructure and is solely responsible for recovery of that key, by re-importing the key material to the CMK, in the event of a full AWS region outage. A customer managed CMK can only contain a single set of key material and does not have the capability to manage a history of key material as AWS managed CMKs do. The ability to re-import key material to such a CMK is provided purely for recovery purposes; if different key material were ever uploaded, any customer data previously encrypted with that CMK in its envelope chain would no longer be decryptable.
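
A rough sketch of that import flow with the AWS SDK for .NET follows; the wrapping algorithm shown and the in-memory key generation are illustrative only, since production key material would normally come from a hardened key management tool:

using System;
using System.IO;
using System.Security.Cryptography;
using Amazon.KeyManagementService;
using Amazon.KeyManagementService.Model;

var kms = new AmazonKeyManagementServiceClient();

// 1. Create an empty CMK container whose key material will come from outside KMS.
var cmk = await kms.CreateKeyAsync(new CreateKeyRequest
{
    Origin = OriginType.EXTERNAL,
    Description = "Customer managed CMK with imported key material"
});
string keyId = cmk.KeyMetadata.KeyId;

// 2. Fetch the wrapping public key and import token for this CMK.
var import = await kms.GetParametersForImportAsync(new GetParametersForImportRequest
{
    KeyId = keyId,
    WrappingAlgorithm = AlgorithmSpec.RSAES_OAEP_SHA_1,
    WrappingKeySpec = WrappingKeySpec.RSA_2048
});

// 3. Wrap the customer's 256-bit key (generated in memory here purely for
//    illustration) with the returned public key and import it.
byte[] keyMaterial = new byte[32];
RandomNumberGenerator.Fill(keyMaterial);

using var rsa = RSA.Create();
rsa.ImportSubjectPublicKeyInfo(import.PublicKey.ToArray(), out _);
byte[] wrapped = rsa.Encrypt(keyMaterial, RSAEncryptionPadding.OaepSHA1);

await kms.ImportKeyMaterialAsync(new ImportKeyMaterialRequest
{
    KeyId = keyId,
    ImportToken = import.ImportToken,
    EncryptedKeyMaterial = new MemoryStream(wrapped),
    ExpirationModel = ExpirationModelType.KEY_MATERIAL_DOES_NOT_EXPIRE
});

Console.WriteLine($"Imported key material into CMK {keyId}");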

Key Rotation Strategies

When using AWS managed CMKs, the customer has minimal control over the deprecation of legacy key material. Though automated rotation does phase out the future use of legacy key material, it does not cause a re-encryption of data keys, and customer data remains encrypted with the same data key. Use of customer managed CMKs enables a greater degree of control over key material source but places a responsibility on the customer to implement a key rotation strategy. In each scenario, there are strategies that leverage behaviors of the KMS service to force certain desired rotation outcomes.

Rotating Only A CMK

For either an AWS managed or customer managed CMK, a customer can effect complete master key rotation. In the case of AWS managed CMKs, this includes phasing out use of legacy key material. A strategy to achieve this outcome would be to initiate a copy of the source customer data within the same region and account. This does not change the data keys or cause customer data re-encryption; however, it does cause the data keys to be re-encrypted under the new CMK.

Rotating Both A CMK and Data Key

For either an AWS managed or customer managed CMK, a customer can effect complete master and data key rotation. In the case of AWS managed CMKs, this would completely phase out use of legacy key material. A strategy to achieve this outcome would be to initiate a copy of the source customer data that crosses an AWS Account and/or AWS Region boundary, and then an additional copy back to the source. This will cause a new data key to be generated at the destination and enable the selection of an arbitrary CMK at the destination. Note that this will require configuration of KMS CMK access permissions; see https://aws.amazon.com/blogs/aws/new-cross-account-copying-of-encrypted-ebs-snapshots/. Following the reverse process to copy data back to the source will cause a new data key to be generated in the source AWS Account/AWS Region for the customer data and enable the selection of a new CMK in the source AWS Account/AWS Region. Cross-region copies would result in cross-region data transfer charges.
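
A rough sketch of the outbound leg of such a copy, using the AWS SDK for .NET, might look like the following (the region names, snapshot ID, and destination CMK are placeholders, and the copy back to the source repeats the same pattern in reverse):

using System;
using Amazon;
using Amazon.EC2;
using Amazon.EC2.Model;

// The client targets the DESTINATION region; SourceRegion names where the
// snapshot currently lives.
var ec2 = new AmazonEC2Client(RegionEndpoint.USWest2);

var copy = await ec2.CopySnapshotAsync(new CopySnapshotRequest
{
    SourceRegion = "us-east-1",
    SourceSnapshotId = "snap-0123456789abcdef0",   // placeholder snapshot ID
    Encrypted = true,
    KmsKeyId = "alias/example-destination-cmk",    // placeholder destination CMK
    Description = "Rotation copy: new data key wrapped by a new CMK"
    // For encrypted source snapshots, the CopySnapshot API may also require a
    // presigned URL signed for the source region; verify the behavior of your
    // SDK version.
});

// The destination snapshot is encrypted with a newly generated data key,
// itself wrapped by the selected destination CMK.
Console.WriteLine($"New snapshot: {copy.SnapshotId}");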

Conclusion

It is possible to implement a key rotation strategy that meets security and/or compliance requirements through manipulation of AWS service behaviors. The rule of thumb is that data copy actions enable new key selection, and crossing an AWS Account or AWS Region boundary causes customer data re-encryption with new keys.

Building a Stronger AWS Developer Community

The cloud market continues to get more competitive.

AWS built its early success by winning developers first. Startups and builders were willing to take risks, move fast, and trust new platforms if those platforms made their jobs easier. AWS became the default choice because it gave developers access to infrastructure and services that previously required enormous capital, time, and operational overhead.

That advantage should not be taken for granted.

As the market matures, competitors are closing the gap. Microsoft in particular understands something important: developers and operations professionals are often the real catalyst for cloud adoption inside an enterprise. CIOs, CTOs, and procurement teams may approve the decision, but developers and operators heavily influence which technologies ever make it that far.

If AWS wants to maintain its leadership position, we cannot rely only on top-down executive relationships. We must continue to win the people who actually build and operate systems.

Developers Drive Adoption

Developers and operations professionals make a staggering number of technology decisions.

They are the people looking for better, faster, and more reliable ways to do their jobs. They are also the people who inherit the responsibility when a platform goes into production. Because of that, they tend to trust what they know, avoid unnecessary complexity, and remain loyal to tools and vendors that consistently make their lives easier.

This is one of Microsoft’s greatest strengths.

Microsoft developers and system administrators are often deeply loyal to the Microsoft ecosystem because Microsoft has spent decades investing directly in that relationship. Visual Studio is not just an IDE. It is a platform for influence. Azure services are integrated directly into the development workflow, making Azure feel like the natural cloud choice for many Microsoft developers.

Microsoft has also built dedicated developer and cloud evangelist teams whose sole focus is outreach: creating content, participating in technical communities, supporting conferences, engaging in developer forums, and helping customers solve real implementation problems.

They are not just selling products. They are building loyalty.

AWS Is Drifting Away From Its Original Strength

AWS started by gaining the trust of developers.

Our strategy today has shifted heavily toward executive buy-in: CIOs, CISOs, CTOs, and enterprise leadership. That matters, but it is incomplete.

Our public messaging often targets too broad an audience and lacks the technical depth developers actually need. Developers look past polished marketing language very quickly. They test services, identify limitations, and decide whether a platform will create operational pain or reduce it.

A shiny message does not survive first contact with implementation.

When developers struggle to understand how to use a service, when documentation is fragmented, or when the simplest path is unclear, we create friction that competitors can use against us.

Developers do not choose the most powerful solution. They often choose the clearest one.

Complexity Is Becoming a Competitive Weakness

AWS became successful because it enabled access.

We made computing power available on demand, and we created a platform flexible enough to support nearly any use case. That flexibility became one of our greatest strengths, but it also created a problem: complexity.

We now offer a portfolio of services with countless configuration options, overlapping capabilities, fragmented documentation, and too many places for customers to search for answers.

For a highly experienced engineer, this flexibility can be powerful.

For everyone else, it can be a barrier.

It is easy to get started with AWS. It is much harder to use AWS well.

Many customers do not want to make a second investment in becoming deep experts just to use the services they are already paying for. They want solutions that are clear, well-documented, and operationally manageable.

If we ignore that, our breadth becomes a deterrent rather than an advantage.

What AWS Needs

Building a stronger AWS developer community requires coordination and focus.

We already do many things well:

  • AWS Lofts

  • re:Invent

  • launch visibility at major conferences

  • strong introductory documentation

  • active forums and blogs

  • broad service coverage

  • a strong technical brand

The problem is fragmentation.

Documentation lives in one place. Examples live somewhere else. Questions are answered somewhere else. Training is disconnected. Developer resources are difficult to discover unless customers already know where to look.

We need a true developer center.

We need developer outreach that feels intentional, not accidental.

That means:

  • a clear and visible AWS developer community

  • better organization of documentation, examples, forums, and training

  • stronger developer-specific events and outreach

  • deeper documentation with real-world examples across all major use cases

  • stronger investment in tools and integrations that make AWS a natural part of the development cycle

  • complete service UI and operational workflows, not just APIs

  • SDK and language champions who represent specific developer communities

  • less fragmentation and overlap across AWS services

  • a stronger emphasis on ease of use in roadmap decisions

Ease of use should be treated as a product feature, not an afterthought.

The Real Competitive Battle

Cloud competition is not just about features.

It is about trust.

Developers trust the platforms that help them succeed under pressure. They trust the vendors whose documentation works at 2 a.m. They trust the tools that reduce ambiguity instead of creating more of it.

That trust creates loyalty, and loyalty creates long-term platform adoption.

AWS won early by earning that trust.

We should make sure we do not lose it.

Amazon Web Services (AWS) Certified Security Specialty (CSS) Beta Exam

*** NOTE: AWS has pulled this specific certification version, refunding those who took the original beta exam. ***

2026 Status Update: The AWS Certified Security - Specialty is now a mature, standard certification. While the beta period mentioned below ended years ago, the core focus on deep-dive security across IAM, Encryption, and Incident Response remains the primary objective of the current exam version.

I had the opportunity to take the AWS Certified Security Specialty Exam at re:Invent 2016. The exam was in a beta phase where questions were being tested, refined, and the exam pass line was being set. While I can't share actual exam questions, I can share advice for others interested in the certification path.

Preparation

I entered the exam cold, drawing only on my working knowledge of AWS and its services, so my perspective is an unbiased view of the exam's difficulty. While blueprints change, the foundational security pillars remain consistent.

Format

  • Duration: ~3hr Exam Time
  • Volume: > 100 Questions (Beta format)
  • Style: Heavy focus on reading comprehension and identifying technical nuances.
  • Service Focus: High concentration on services with direct security relationships:
    • Identity & Access Management (IAM)
    • AWS WAF & Shield
    • CloudFront & ACM (Certificate Manager)
    • Security Groups, NACLs, and VPC Architecture

My Exam Perspective

I found the questions to be very long, requiring significant reading comprehension to answer accurately. The possible answers were also lengthy, requiring careful differentiation to identify the most valid technical solution. There were substantial parallels to security-related questions found on the Professional-level Architect exams.

Amazon Cognito User Pool Admin Authentication Flow with AWS SDK For .NET

Implementing the Amazon Cognito User Pool Admin Authentication Flow with the AWS SDK for .NET offers a path to user authentication without managing the host of components otherwise needed to sign up, verify, store and authenticate a user. Though Cognito is largely framed as a mobile service, it is well suited to support web applications. In order to implement this process, you would use the Admin Authentication Flow (ADMIN_NO_SRP_AUTH). This example assumes that you have already configured a Cognito User Pool with an App, ensuring the "Enable sign-in API for server-based authentication (ADMIN_NO_SRP_AUTH)" option is checked for that app on the App tab and that no App client secret is defined for that App. App client secrets are not supported in the .NET SDK. It is also assumed that a Federated Identity Pool is configured to point to the aforementioned User Pool.

This auth flow bypasses the Secure Remote Password (SRP) protocol protections heavily used by AWS to prevent passwords from ever being sent over the wire. As a result, when used in a client-server web application, your users' passwords will be transmitted to the server, and that communication must be protected with strong encryption to prevent compromise of user credentials. The below code implements a CognitoAdminAuthenticationProvider with Authenticate and GetCredentials members. The Authenticate method returns a wrapped ChallengeNameType and AuthenticationResultType set of responses. A challenge will only be returned if additional details are needed for authentication, in which case you would simply ensure those details are included in the UserCredentials provided to the Authenticate method and call Authenticate again. Once authenticated, an AuthenticationResultType will be included in the result and can be used to call the GetCredentials method and obtain temporary AWS credentials.
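
A minimal sketch of such a provider, assuming the Amazon.CognitoIdentityProvider and Amazon.CognitoIdentity packages from the AWS SDK for .NET, might look like this (the UserCredentials and AdminAuthResult types, pool identifiers, and region are illustrative placeholders rather than the original listing):

using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon;
using Amazon.CognitoIdentity;
using Amazon.CognitoIdentityProvider;
using Amazon.CognitoIdentityProvider.Model;

// Hypothetical container for the username/password (and any challenge answers).
public class UserCredentials
{
    public string Username { get; set; }
    public string Password { get; set; }
}

// Hypothetical wrapper pairing a returned challenge with the auth result.
public class AdminAuthResult
{
    public ChallengeNameType Challenge { get; set; }
    public AuthenticationResultType Authentication { get; set; }
}

public class CognitoAdminAuthenticationProvider
{
    private readonly AmazonCognitoIdentityProviderClient _idp =
        new AmazonCognitoIdentityProviderClient(RegionEndpoint.USEast1);

    private readonly string _userPoolId = "us-east-1_EXAMPLE";      // placeholder
    private readonly string _appClientId = "exampleclientid";       // placeholder
    private readonly string _identityPoolId = "us-east-1:example";  // placeholder

    // Runs the ADMIN_NO_SRP_AUTH flow; a challenge is returned only when
    // Cognito needs additional details before issuing tokens.
    public async Task<AdminAuthResult> Authenticate(UserCredentials credentials)
    {
        var response = await _idp.AdminInitiateAuthAsync(new AdminInitiateAuthRequest
        {
            UserPoolId = _userPoolId,
            ClientId = _appClientId,
            AuthFlow = AuthFlowType.ADMIN_NO_SRP_AUTH,
            AuthParameters = new Dictionary<string, string>
            {
                { "USERNAME", credentials.Username },
                { "PASSWORD", credentials.Password }
            }
        });

        return new AdminAuthResult
        {
            Challenge = response.ChallengeName,
            Authentication = response.AuthenticationResult
        };
    }

    // Exchanges the User Pool ID token for temporary AWS credentials via the
    // Federated Identity Pool.
    public CognitoAWSCredentials GetCredentials(AuthenticationResultType authentication)
    {
        var credentials = new CognitoAWSCredentials(_identityPoolId, RegionEndpoint.USEast1);
        credentials.AddLogin(
            $"cognito-idp.us-east-1.amazonaws.com/{_userPoolId}",
            authentication.IdToken);
        return credentials;
    }
}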

Usage of the above code would look something like the below. This example uses the temporary credentials to call S3 ListBuckets.
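
A sketch of that usage, with placeholder user credentials and region, might look like this:

using System;
using Amazon;
using Amazon.S3;

var provider = new CognitoAdminAuthenticationProvider();

var result = await provider.Authenticate(new UserCredentials
{
    Username = "someuser",
    Password = "somepassword"
});

if (result.Challenge == null && result.Authentication != null)
{
    // Exchange the ID token for temporary AWS credentials and call S3.
    var awsCredentials = provider.GetCredentials(result.Authentication);
    using var s3 = new AmazonS3Client(awsCredentials, RegionEndpoint.USEast1);
    var buckets = await s3.ListBucketsAsync();
    Console.WriteLine($"Found {buckets.Buckets.Count} buckets");
}
else
{
    // A challenge was returned; supply the requested details in UserCredentials
    // and call Authenticate again.
    Console.WriteLine($"Additional challenge required: {result.Challenge}");
}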

As an additional note, the options for the CognitoAWSCredentials Logins dictionary are listed below. This example uses the last listed value.

Logins: {
   'graph.facebook.com': '[FBTOKEN]',
   'www.amazon.com': '[AMAZONTOKEN]',
   'accounts.google.com': '[GOOGLETOKEN]',
   'api.twitter.com': '[TWITTERTOKEN]',
   'www.digits.com': '[DIGITSTOKEN]',
   'cognito-idp.[region].amazonaws.com/[your_user_pool_id]': '[id token]'
}

Getting Started with AWS Lambda C# Functions

For those of us that are .NET developers at heart, we have powerful tools for running serverless C# applications on AWS. AWS Lambda now officially supports .NET 10 as a managed runtime, providing long-term support (LTS) through November 2028.

Modern C# support in Lambda has evolved beyond early .NET Core. Developers can now utilize C# File-Based apps, which eliminate much of the traditional boilerplate code. These functions typically publish as Native AOT (Ahead-of-Time) by default, offering up to an 86% improvement in cold start times by removing the need for JIT compilation at runtime.

Prerequisites:

  1. Development Environment: Visual Studio 2022 (latest version) with the .NET 10 SDK installed.
  2. AWS Toolkit for Visual Studio: Install the latest extension from the Visual Studio Marketplace. It now includes Amazon Q Developer for AI-assisted coding and one-click publishing.

Getting Started with the .NET CLI:

The fastest way to scaffold a new function is using the Amazon Lambda Templates. You can install and create a file-based function with these commands:


dotnet new install Amazon.Lambda.Templates
dotnet new lambda.FileBased -n MyLambdaFunction

Key Project References:

  • Amazon.Lambda.Core: The foundational library for Lambda functions.
  • Amazon.Lambda.RuntimeSupport: Required for file-based apps and Native AOT.
  • Amazon.Lambda.Serialization.SystemTextJson: High-performance JSON serialization using source generators.
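
To show how these packages fit together, a minimal executable-assembly handler might look like the sketch below (the handler logic and serializer choice are illustrative; a Native AOT build would typically swap in a source-generated serializer):

using Amazon.Lambda.Core;
using Amazon.Lambda.RuntimeSupport;
using Amazon.Lambda.Serialization.SystemTextJson;

// A trivial handler: Lambda deserializes the incoming JSON string, the handler
// transforms it, and the result is serialized back to the caller.
var handler = (string input, ILambdaContext context) =>
{
    context.Logger.LogLine($"Processing: {input}");
    return input.ToUpperInvariant();
};

// RuntimeSupport hosts the handler inside the Lambda runtime API loop.
await LambdaBootstrapBuilder.Create(handler, new DefaultLambdaJsonSerializer())
    .Build()
    .RunAsync();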

Using the AWS Toolkit, you can right-click your project and select "Publish to AWS Lambda" to deploy instantly. The toolkit handles the complexity of Native AOT container builds automatically if Docker is installed on your machine.

FISMA, FedRAMP and the DoD CC SRG: A Review of the US Government Cloud Security Policy Landscape

Note: The US Government cloud security policy landscape has changed significantly since this was originally written. I no longer have the first-hand, current context required to provide a meaningful update to this information, so please treat the following as a historical reference.

The Federal Information Security Management Act (FISMA), a US law signed in 2002, defines the information protection requirements for US Government, "government", data and is applicable to all information systems that process any government data regardless of ownership or control of such systems. Systems Integrators (SI) under contract to perform work for the government are almost always provided some government furnished information (GFI) or government furnished equipment (GFE), and FISMA requirements extend to the systems owned and/or operated by these SIs if they store or process government data. Government data always remains under the ownership of the source agency, with that agency holding sole responsibility for determining the data's sensitivity level. It is usually a contractual requirement for an SI charged with management of government data to ensure FISMA compliance, and an SI is obligated to destroy or return all GFI and GFE at the end of the contractual period of performance. Government data falls into a number of information sensitivity categories ranging from public information to the highest of classification, and the compliance requirements imposed by FISMA increase in lockstep with that sensitivity.

A large portion of government data under the management or control of most SIs will fall in the public or controlled unclassified information (CUI) buckets. Public data is rather straightforward in that it is publicly releasable and, if compromised, would have little to no impact on the public image, trust, security or mission of the owning government agency and/or its personnel, and as such requires the least compliance overhead. CUI on the other hand is significantly more complex and nuanced. CUI data could compromise the public image, trust, security or mission of the owning government agency and/or its personnel. As such, CUI data has some restriction applied to its distribution [https://www.archives.gov/cui/registry/category-list.html]. With Department of Defense (DoD) data, there are additional types of distribution restrictions defined in DoD Directive (DoDD) 5200.01 v4 [http://www.dtic.mil/whs/directives/corres/pdf/520001_vol4.pdf] and a host of marking requirements [http://www.dtic.mil/whs/directives/corres/pdf/520001_vol2.pdf]. A common misunderstanding of CUI requirements is that, due to its unclassified nature, it does not require significant security consideration. This misunderstanding is something to be cognizant of in any engagement with a government agency or SI, and it is advisable to inquire about CUI data restrictions as this area comes with certain legal as well as contractual ramifications.

Data sensitivity is a multifaceted factor that the National Institute of Standards and Technology (NIST) breaks down into three areas: Confidentiality, Integrity and Availability, the “C-I-A category”, presented in the format {x,x,x} where “x” will be low, moderate or high. The highest denominator of these three categories determines the sensitivity, and therefore the compliance requirements, of an information system. Determining the data sensitivity of an information system is a process defined in NIST Special Publication (SP) 800.60 volume 1 [http://csrc.nist.gov/publications/nistpubs/800-60-rev1/SP800-60_Vol1-Rev1.pdf]. This process starts with determining the types of data processed and/or stored by an information system. This is a critical step to ensure accurate compliance implementation. This will enable the selection of data type categories, defined in NIST SP 800.60 volume 2 [http://csrc.nist.gov/publications/nistpubs/800-60-rev1/SP800-60_Vol2-Rev1.pdf]. For each data category applicable to an information system, NIST 800.60 volume 2 provides a baseline C-I-A category assessment as well as a number of caveats that could dictate a higher or lower categorization assessment for each of the C-I-A categories. An information owner can choose to adjust these assessments based on operational factors; however, deviation from a C-I-A category baseline will require justification. This will result in a list of applicable data type categories and an assessed C-I-A categorization for each. The highest categorization across the three C-I-A categories for all data types becomes the baseline level for the information system. The output of this process is generally a document describing the applicable data type categories and the assessed C-I-A categorization for each, with required justifications. This document generally requires review and signature by the system owner and an organization's authorizing authority. Any change in data processed or stored by an information system should trigger a new iteration of this and all subsequent processes.

Compliance requirements come in the form of auditable states for various aspects of an information system's infrastructure, architectural design, implementation and the policies and practices, “governance”, established surrounding that system's management. There are generally two different types of controls: security controls and specific vendor product or process controls, "implementation controls". Security controls are high level and cover a broad requirement for an information system that often involves a number of physical implementation aspects and/or process documentation components to meet. Implementation controls are often very specific, requiring verification of state across multiple components to roll up to security control compliance. The NIST SP 800 series documents [http://csrc.nist.gov/publications/PubsSPs.html#SP 800], stemming from the need to define guidelines for compliance with FISMA and other laws, are the basis for government agency compliance programs. These programs draw from the security controls defined in NIST SP 800.53 [http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf] Appendix D. Organizations across the government are responsible for implementation of their own security and compliance programs as required by FISMA. As a result, processes vary across agencies, though most are implementations of NIST SP 800.37 [http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-37r1.pdf], which describes a process called the Risk Management Framework (RMF). RMF is a risk-based approach to addressing information technology (IT) security with emphasis placed on control compliance priority and assessing the overall risk posed by a system. NIST SP 800.53 Revision 4 Appendix D defines three control baselines (low, moderate and high) corresponding to the assessed data sensitivity level of an information system. Selection of NIST controls for a given information system within non-DoD government organizations generally falls to agency-specific security requirement determinations. NIST also defines the concept of creating overlays, which are purpose-driven control sets, the privacy overlay(s) [http://www.dla.mil/Portals/104/Documents/GeneralCounsel/FOIA/FOIA_PrivacyOverlay_150420.pdf] being one of the most commonly mentioned, which are generally consistent across organizations and may overlap or be additive to an organization's own security requirements.

The White House established a cloud first policy, across government agencies, in 2011 [https://www.whitehouse.gov/sites/default/files/omb/assets/egov_docs/federal-cloud-computing-strategy.pdf] that began a new initiative to evaluate commercial cloud service offerings (CSO) provided by cloud service providers (CSP) before the use of government owned hosting solutions. This policy, established in large part as a measure to address the huge IT budget across the government, recognized that industry was far more agile and had far more resources than the government to produce innovative and cost effective solutions. In implementing this policy, agencies began assessing CSP CSOs to ensure that they met FISMA requirements. The result of these assessments was authorizations to operate (ATO) for systems, leveraging the same underlying cloud infrastructure, that began to overlap with wasteful duplication of work across agencies. The Federal Risk and Authorization Management Program (FedRAMP) was the solution, creating a common assessment program and a set of three control baselines (low, moderate and high), based on NIST SP 800.53 controls, for CSP CSOs such that Provisional ATOs (P-ATO), attesting to a CSP's contribution to a system's control coverage, could be shared and trusted across agencies. The term provisional is used in that they are components of a full system's ATO, not a full system in themselves. It follows that FISMA applies to all government systems and FedRAMP is a specific program for CSPs to implement and assess compliance with FISMA requirements for their CSOs, covering those aspects within the boundaries of a CSP's purview. This enables their customers to reuse, "inherit", the compliance assessments already completed for CSP CSOs, reducing the overall workload and cost of implementing security for government IT systems.

Each of the FedRAMP control baselines represents a tiered compliance level aligned to NIST SP 800.60 volume 2 data categorizations for data sensitivity, with an escalating number of applicable NIST SP 800.53 controls [https://www.fedramp.gov/resources/templates-2016/]. FedRAMP authorizations come in two forms: agency-specific ATOs and those granted by the FedRAMP Joint Authorization Board (JAB), which itself is a joint venture between government agency chief information officers (CIOs). An agency ATO is one in which a specific agency has assessed the compliance of a CSO against their specific security requirements and granted an ATO to the specific CSO, which can then be leveraged as a starting point for the next agency that comes along and uses that CSO. The security requirements, and hence implemented controls, used may or may not meet those of the next organization and therefore may not be reusable. A JAB P-ATO on the other hand requires compliance with a common control baseline and is the most stringent path that a CSP can take to FedRAMP compliance. In this path, a CSP prepares a documentation package covering their compliance implementation to meet a specific FedRAMP control baseline. A 3rd Party Assessment Organization (3PAO), accredited by the FedRAMP program management office (PMO), must concur with the CSP's independent control assessment. Each of the JAB members then further reviews it and must concur before granting a P-ATO. FISMA requirements continue to apply to the systems implemented on top of FedRAMP authorized CSP CSOs, and the system owner is responsible for any deltas in control compliance beyond those covered under the CSP CSO authorization.

For many years, the DoD defined their own compliance program, called the DoD Information Assurance Certification and Accreditation Process (DIACAP), described in DoD Instruction (DoDI) 8510.01, with their own security controls. Reissuance of this program in 2015 saw a rename, RMF for DoD IT, “RMF”, and a shift to NIST SP 800.37 processes as well as NIST 800.53 security controls. This established a new path to security implementation across DoD systems as a whole but had a particular impact on the DoD's ability to leverage P-ATOs granted under the FedRAMP program. Older established information systems have the leeway of a grace period to convert from DIACAP to RMF; however, all DoD cloud systems are required to implement RMF from the start. The DoD is by far the largest of government organizations and is a huge target for attackers, with a treasure trove of information spread across sprawling and sometimes ancient IT systems. Because of the DoD’s huge attack surface and the sensitivity of its mission, the DoD requires more stringent security controls and processes than those imposed on other government agencies. Fast forwarding through several years of complex work to change an organization as large as the DoD, and with significant support from the DoD CIO, the Defense Information Systems Agency (DISA) was given the task of defining and documenting the gaps between the FedRAMP baselines and DoD requirements. DISA delivered a document in January of 2015 called the Cloud Computing (CC) Security Requirements Guide (SRG).

The CC SRG, also branded as FedRAMP+, inherited some terminology from earlier documentation attempts, called Impact Levels (IL), which have evolved to align to FedRAMP’s baseline levels. The CC SRG control requirements are specifically based on FedRAMP moderate baseline controls, and a CSP must meet the moderate baseline control set for a DoD authorization. IL 2 (there is no IL 1) aligns with the FedRAMP moderate baseline and is applicable to IT systems processing or storing at most publicly releasable data where the NIST C-I-A categorization of the system is low to moderate. IL 4 (there is no IL 3) aligns with the FedRAMP moderate baseline and is applicable to IT systems processing CUI data where the NIST C-I-A categorization of the system is moderate. IL 4 systems may include those that process and/or store Personally Identifiable Information (PII), Protected Health Information (PHI), etc.; however, they may require application of additional overlay controls if the information system meets certain Privacy Act [https://www.justice.gov/opcl/privacy-act-1974] criteria. It may be possible to consider some systems where the NIST C-I-A categorization of the system is high as IL 4 if the authorization of the CSP CSO is up to the high baseline and the sensitivity of data does not cross the national security systems (NSS) threshold. IL 5 aligns with the FedRAMP high baseline and is applicable to more sensitive CUI as well as NSS where the NIST C-I-A categorization of the system is high. At the present time, there are physical separation concerns that prevent IL 5 workloads from deployment on commercial cloud platforms. Finally, IL 6 is beyond FedRAMP program alignment and aligns with data categorized at the SECRET level, making it generally out of scope for CC SRG documentation as such data requires physical environment isolation that does not map to public models.

The CC SRG stipulates a requirement that IL 4 and IL 5 workloads remain isolated from the internet and connect to the non-secure internet protocol router network (NIPRNet) via direct circuit or internet protocol security (IPsec) virtual private network (VPN) to a NIPRNet edge gateway. To support the IL 4 and IL 5 NIPRNet connection requirement, DISA has defined the concept of a Boundary Cloud Access Point (BCAP), “CAP”, that acts as the gateway between a CSP's CSO and the NIPRNet edge. DISA has the task of providing an Enterprise CAP for DoD systems leveraging authorized cloud services. DISA delivered an initial CAP capability in late 2015 and later a functional requirements (FR) document called the Secure Cloud Computing Architecture (SCCA) [http://iase.disa.mil/cloud_security/Pages/index.aspx] describing their desired future state. A CAP serves two primary purposes: connectivity between a CSP and the NIPRNet, and protection of the DoD Information Network (DoDIN), a broad term for DoD networks, from threats originating from a CSP CSO. In practice, today's CAP is comprised of a common point, referred to as a meet-me-point, where CSP and DoD infrastructure can meet in a co-location facility (co-lo), as well as the appropriate security stack to monitor and protect against threats. IL 4 and IL 5 systems may pass outbound traffic to the internet through the NIPRNet internet access points (IAP) and may accept traffic inbound from the IAP; however, inbound traffic may require whitelisting.

DoD organizations may present a case for deployment of their own CAP solution aligned with the FR specifications; however, this requires DoD CIO approval and a compelling use case. The Navy Space and Naval Warfare Systems Command (SPAWAR) Systems Center Atlantic, “SSC LANT”, began their cloud exploration several years before DISA gained their cloud roles and responsibilities and before DoD policy was ready for commercial cloud services. The commercial service integration integrated project team (IPT) began working through many of the challenges that have enabled the DoD as a whole to move toward cloud, with the support of the Navy CIO at the time, Terry Halvorsen, who later became the DoD CIO. As part of those efforts, the Navy established a CAP capability, which operates today in a concurrent fashion with the DISA Enterprise capability.

The authority to authorize information systems within the DoD under the RMF program resides at the CIO level; however, it is generally delegated to authorizing officials (AO) aligned to organizational verticals. An AO is the final authority that signs a system ATO, granting it the authority to operate given the compliance mechanisms documented in the system's security package. The structure of AO authority delegation varies significantly across the DoD, with services like the Air Force having a highly distributed authority and services like the Army having a more centralized approach. This structure, and the civilian or military staff filling these roles, change frequently due to assignment rotations. The structure of the review process for any system security package will be specific to that organization; however, it follows the general model of a review organization that reviews documentation and then presents the risk to the AO for acceptance. In some cases, these organizations may conduct only documentation reviews while in others they may leverage a team of auditors to validate control compliance.

The RMF process is a circular process flow: (step 1) system categorization, (step 2) security control selection, (step 3) security control implementation, (step 4) security control assessment, (step 5) system authorization and (step 6) monitoring security controls. Steps 1 and 2 in the RMF process directly align with the steps required to determine an information system's IL. Therefore, it naturally follows that starting RMF leads to determining an information system's IL. The IL of a system is the key component in determining if a CSP's CSO authorization is sufficient to support an information system’s needs. It follows that a CSP CSO authorization must meet or exceed the IL of the proposed system workload. From that point forward, the IL becomes less relevant and the focus shifts primarily to implementation and then assessment of the security controls selected after system categorization. DISA, as a broader mission responsibility, defines both SRGs for other broad technologies and security technical implementation guides (STIGs) [http://iase.disa.mil/stigs/Pages/index.aspx] for specific vendor products. STIGs provide very detailed checks for product configuration, all targeted at compliance with a higher-level security control. As implementation and assessment progress, STIG evidence, “checklists”, serves as compliance evidence for a system's eventual risk assessment. There is overlap in intent across NIST security controls, and STIG checks may apply to several different security controls. For this reason, a Control Correlation Identifier (CCI) [http://iasecontent.disa.mil/stigs/zip/u_cci_list.zip] maps overlapping NIST security controls and the STIGs that address a security control. It is important to note that not all STIG checks map to a CCI, not all CCIs will have mapped STIG “checklists”, and even when mapped they may not provide complete CCI coverage. In a large environment, you might imagine that multiple STIGs could apply to every server, “instance”, and often a STIG applies multiple times across instances in an environment. CCIs not supported completely, or only supported in part, by STIG checklists require documentation. This documentation, called a system security plan (SSP), covers an information system's CCI compliance, supported by STIG checklists and system governance processes, to facilitate system risk acceptance. The format of an SSP may be specific to the authorizing organization; however, there is spotty coverage of DoD SSP templates in the wild. A DoD SSP will likely be different from a FedRAMP SSP given the emphasis on CCIs, and therefore the FedRAMP SSP templates generally do not apply to DoD systems.

DoD information systems must also comply with requirements of the Cyber Incident Handling Program defined in DoDI 8530.01 [http://www.dtic.mil/whs/directives/corres/pdf/853001p.pdf], “cyber defense” or “C2”. The cyber defense program establishes a three-tiered reporting chain, covering threat detection and incident response, starting with US Cyber Command (USCYBERCOM) at tier one and extending to mission system owners at tier three. In between, at tier two, several participants enable both communications surrounding, and oversight of, cyber threat monitoring. Of those participants, the Boundary Cyber Defense (BCD) and Mission Cyber Defense (MCD) roles are the most important for cloud. The BCD role is to monitor and protect the DoDIN edge, in this case the NIPRNet edge via a NIPRNet Federated Gateway (NFG), where a meet-me-point connects to the NIPRNet. The CAP provider should establish the required BCD relationship. The MCD role must be filled by a Cyber Defense Service Provider (CDSP), often referred to as a CNDSP due to the prior lexicon, who themselves must be accredited by USCYBERCOM and who provides an oversight role to the tier 3 mission system owner. All mission systems must align with an accredited CDSP in order to connect to the DoDIN. Depending on a mission system's organizational alignment, it will fit most appropriately with one CDSP or another; however, if that CDSP is unable to provide support, DISA generally acts as the provider of last resort. Alignment with a CDSP generally takes the form of a signed memorandum of agreement (MOA) or service level agreement (SLA) and requires some exchange of funds between governmental organizations. Obtaining a cloud project connection consent (CPTC) to the DISA CAP from the DISA Cloud Office requires documentation of this relationship. Since this alignment process can take some time, it is best to contact the appropriate CDSP at the very beginning of any DoD cloud project.

Amazon Web Services (AWS) specifically provides CSOs authorized in bundles along the boundaries of their regions. Each service is considered a CSO, and a list of CSOs covered under AWS authorizations is provided on the AWS DoD SRG Compliance site [https://aws.amazon.com/compliance/dod/]. The US East and US West region authorization under the FedRAMP program is at the moderate control baseline and the GovCloud authorization is at the high control baseline. AWS provides an Enterprise Accelerator - Compliance for NIST-based Assurance Frameworks [https://s3.amazonaws.com/quickstart-reference/enterprise-accelerator/nistv2/latest/docs/standard-nist-architecture-on-the-aws-cloud.pdf] and a security control matrix [https://s3.amazonaws.com/quickstart-reference/enterprise-accelerator/nistv2/latest/docs/NIST-800-53-Security-Controls-Mapping.xlsx] that explain both how AWS services align to the NIST framework and the controls that AWS is responsible for maintaining partial or complete compliance with. For non-DoD government systems, region selection starts with identifying the available regions with the proper authorization level and should consider both cost and geo-alignment factors. For DoD systems, region selection is a bit more direct in that public IL 2 systems can choose from all CONUS regions while all other DoD workloads at IL 4 must use GovCloud. AWS is not able to support IL 5 workloads today for the DoD due to physical separation concerns. At the further end of the spectrum, AWS may be able to offer support for IL 6 workloads, and a category not identified in the CC SRG for higher classification levels, via completely isolated private service regions. GovCloud is sometimes confused as being the AWS IL 6 service capability; however, that is not the case. An area not yet covered is other legal obligations surrounding data, especially data subject to the International Traffic in Arms Regulations (ITAR) or Export Administration Regulations (EAR), henceforth "ITAR", both of which restrict the transfer of certain military, industrial or manufacturing information internationally. The GovCloud region complies with ITAR responsibilities covering the CSOs provided through that region and follows the standard AWS shared responsibility model [https://aws.amazon.com/compliance/itar/]. AWS does not restrict customer use of GovCloud once vetted to meet requirements for access; the account owner, holder of the root account, must be a US person on US soil with a legitimate need to access the region. Customers must then implement proper controls on their infrastructure and governance to meet ITAR requirements.

NOTICE: All thoughts/statements in this article are mine alone and do not represent those of Amazon or Amazon Web services. All referenced AWS services and service names are the property of AWS. Although I have made every effort to ensure that the information in this article was correct at writing, I do not assume and hereby disclaim any liability to any party for any loss, damage, or disruption caused by errors or omissions, whether such errors or omissions result from negligence, accident, or any other cause.

Using Linqpad to Query Amazon Redshift Database Clusters

Looking for a quick and easy way to query an Amazon Redshift Database Cluster? I was, and the first place I turned was my favorite tool for this kind of thing, Linqpad. I was a bit dismayed to find that no one, as far as I could tell, has developed a Linqpad database driver for Redshift. Small note: there are a few PostgreSQL options, and Redshift is supposed to be PostgreSQL compatible; however, none of them seemed to work for Redshift.

Giving credit to the author of this article describing the use of Linqpad for connections to MS Access, I made a few tweaks and boom, I have a working way to connect to and query Redshift. So in the pay it forward spirit, I thought I'd share.

2026 Driver Update: AWS is retiring the legacy 1.x ODBC driver on June 1, 2026. Ensure you have installed the Amazon Redshift ODBC Driver 2.x. Unlike previous versions, the x64 driver is now the standard requirement for modern versions of LINQPad.
// PREREQUISITES:
// (1) Copy and paste this entire block of code into a Linqpad query window, 
//     no connection needed, and change language to C# Statement(s).
// (2) To use the .NET ODBC assembly, you'll have to press F4 then click on the 
//     "Additional Namespace Imports" tab. Add "System.Data.Odbc".
// (3) Install the Amazon Redshift ODBC Driver 2.x (x64). 
// (4) Update the query settings below.

// ***************** Update Settings Below *****************
string endpoint = " <endpoint> ";
string database = " <database_name> ";
string user     = " <username> ";
string pass     = " <password> ";
string port     = "5439"; // Default Redshift port

string table = "";   // REQUIRED: set this to the table you want to query, e.g. "public.sales"
string query = "SELECT * FROM " + table;
// ***************** End Update Settings *******************

string connectionString = $"Driver={{Amazon Redshift (x64)}}; Server={endpoint}; Database={database}; UID={user}; PWD={pass}; Port={port};";

using(OdbcConnection connection = new OdbcConnection(connectionString))
{
    Console.WriteLine($"Connecting to [{endpoint}]...");
    try
    {
        if (query.StartsWith("SELECT", StringComparison.OrdinalIgnoreCase))
        {
            using (OdbcDataAdapter adapter = new OdbcDataAdapter(query, connection))
            {   
                DataSet data = new DataSet();
                adapter.Fill(data, table);
                Console.WriteLine($"Found [{data.Tables[0].Rows.Count}] rows");
                data.Dump();
            }
        }
        else
        {
            connection.Open();
            using (OdbcCommand command = new OdbcCommand(query, connection))
            {
                var impactedRows = command.ExecuteNonQuery();
                Console.WriteLine($"[{impactedRows}] rows impacted");
            }
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.ToString());
    }
}

Author’s Note: This article reflects my personal professional experience and opinions. While my insights are informed by my professional history, these views are my own and do not represent the official position of my former employer.

About the Author: Jacob Marks is an engineering leader with over 20 years of experience, including a decade at Amazon Web Services (AWS) where he led teams in EC2 Core Platform and the development of the AWS Payment Cryptography service.
