Good Engineering Managers Don’t Leave the Technology Behind

As a manager of software teams, I’ve been asked many times about the transition from building software to managing the people who build it. It’s an existential question for a lot of engineers as they think about career growth: Do I become a manager, or do I stay technical?

That framing has always bothered me, because it assumes something I don’t believe to be true: that managing and continuing to develop technical depth are mutually exclusive.

It’s certainly possible to manage a software project with only surface-level technical knowledge. Plenty of organizations do exactly that. But in my experience, truly understanding the systems your engineers are working on is invaluable, both to the team and to the outcomes they deliver.

What Changes When You Become a Manager

The real shift isn’t from technical to non-technical. It’s a shift in how and where your technical skills are applied.

As an engineer, your leverage comes from the code you write. As a manager, your leverage comes from the decisions you enable, the risks you surface early, and the technical tradeoffs you help your team navigate.

That requires more than vocabulary-level familiarity. It requires enough depth to ask the right questions, recognize when complexity is creeping in, and understand when a problem is architectural versus organizational.

Why Technical Depth Still Matters

Teams know very quickly whether their manager actually understands the work. Not at the level of “could I implement this myself,” but at the level of “do you understand why this is hard, risky, or worth doing.”

Technical depth enables better judgment calls:

  • When a deadline is unrealistic versus merely uncomfortable.
  • When an incident is a one-off versus a systemic design issue.
  • When adding people will help, and when it will slow everything down.

It also builds trust. Engineers are far more willing to accept hard tradeoffs when they believe those decisions are informed by real understanding rather than abstraction.

The False Choice

The idea that you must choose between management and technical growth is mostly a product of how organizations structure careers, not a law of nature.

Some managers stop developing technically because their role no longer demands it. Others continue learning because it makes them better at prioritization, architecture discussions, and long-term planning.

I’ve always believed that the best engineering leaders stay close enough to the technology to understand its constraints, even if they are no longer the primary authors of the code.

Where I Landed

For me, management was not a departure from engineering. It was an expansion of scope.

The tools changed. The feedback loops got longer. But the core skill, understanding complex systems and helping them work better, remained the same.

If you’re an engineer facing this decision, my advice is simple: don’t assume you’re choosing between people and technology. The best managers I’ve worked with never did.


NOTICE: The thoughts and statements in this article are mine alone and do not represent those of any past or present employer. This content reflects personal experience and opinion.

VPC Flow Logs: Use Them Intentionally

Note: I originally sketched this post years ago and never finished it. I’m publishing it now as a retrospective on how I think about VPC Flow Logs at scale.

For a long time, the default guidance in AWS environments was simple: enable VPC Flow Logs everywhere. At small to moderate scale, that advice is usually fine. At large scale, it becomes expensive, noisy, and often redundant.

There’s an inherent catch-22 with Flow Logs. If you don’t have them enabled, you miss historical data when you need it. If you enable them universally, you can generate massive volumes of duplicated traffic data that few teams ever analyze in a meaningful way.

At sufficient scale, AWS can perform network-level analysis across its infrastructure independent of whether an individual account is exporting Flow Logs. Because of that, Flow Logs are not always treated internally as a hard security requirement for every workload. I argued for that shift myself, mainly because the cost and operational overhead often outweighed the marginal benefit.

None of that makes Flow Logs pointless. It means they should be used deliberately.

Analysis vs. Compliance

I think of Flow Logs primarily as an analysis tool. If you have a compliance obligation that requires network traffic retention, or you have automated tooling that continuously analyzes Flow Logs for anomalies, they’re absolutely worth enabling.

If you don’t, you’re often collecting data “just in case,” with no realistic plan to review it. In that scenario, Flow Logs tend to become expensive cold storage rather than a security control.

Practical Guidance

  • If you’re unsure, enable them. When you’re actively troubleshooting or operating a high-risk environment, having the data is better than wishing you did.
  • Always manage retention. Don’t enable Flow Logs without S3 lifecycle policies. Expire or transition data aggressively; thirty to ninety days is enough for most investigations (see the lifecycle sketch after this list).
  • Be honest about usage. If no one is looking at the data and no system is analyzing it, you’re paying for peace of mind, not security.
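
To make the retention point concrete, here is a minimal sketch using the AWS SDK for .NET that applies an expiration lifecycle rule to a bucket receiving Flow Logs. The bucket name, rule ID, and the 90-day window are placeholders to adapt to your own retention requirements.

using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

public static class FlowLogRetention
{
    // Applies a 90-day expiration rule to every object in the flow log bucket.
    public static async Task ApplyRetentionAsync(string bucketName)
    {
        using (var s3 = new AmazonS3Client())
        {
            await s3.PutLifecycleConfigurationAsync(new PutLifecycleConfigurationRequest
            {
                BucketName = bucketName,
                Configuration = new LifecycleConfiguration
                {
                    Rules = new List<LifecycleRule>
                    {
                        new LifecycleRule
                        {
                            Id = "expire-flow-logs",
                            Status = LifecycleRuleStatus.Enabled,
                            // An empty prefix applies the rule to the whole bucket.
                            Filter = new LifecycleFilter
                            {
                                LifecycleFilterPredicate = new LifecyclePrefixPredicate { Prefix = "" }
                            },
                            Expiration = new LifecycleRuleExpiration { Days = 90 }
                        }
                    }
                }
            });
        }
    }
}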

Used intentionally, VPC Flow Logs are valuable. Enabled blindly, they’re just another growing bucket of logs no one reads.


NOTICE: The thoughts and statements in this article are mine alone and do not represent those of Amazon or Amazon Web Services. AWS service names are trademarks of their respective owners. This content is provided for informational purposes only and may contain errors or omissions. I disclaim liability for any loss, damage, or disruption arising from its use.

Amazon Web Services (AWS) Solutions Architect Professional Exam

Note: I originally wrote this post in 2017 but never published it at the time. Please note that the AWS certification landscape has changed significantly since then; this is provided strictly as a historical reference.

Another year, another re:Invent. This year at re:Invent 2017, with my Associate certification up for renewal again, I decided to sit for the AWS Solutions Architect Professional exam. The exam is in a beta phase, where questions are being tested and refined and the exam pass line is being set. I won't find out whether I passed until March 2017, and I can't share actual exam questions, but I can share advice for others interested in the exam in the future. Note that as of January 2017 the beta is closed, as it has proved very popular.

Preparation:

I entered the exam cold, drawing only on my working knowledge of AWS and its services, so my perspective should be an unbiased view of the exam. There is an exam blueprint, but it has been pulled from the AWS website.

Format:

  • ~3hr Exam Time
  • > 100 Questions
  • Reading Comprehension Questions
  • Question Nuances Were Important
  • Heavy Focus on Services and Service Components with a Security Relationship:
    • IAM
    • WAF
    • CloudFront
    • ACM
    • Security Groups
    • NACLs
    • VPC
    • etc.

My Exam Perspective:

I found the questions to be very long, requiring significant reading comprehension to answer. The possible answers were also long and required careful reading. I had to read a number of questions at least twice to pick up on all of their nuances and differentiate the validity of the answers. The questions had substantial parallels to security-related questions on other exams.


NOTICE: All thoughts/statements in this article are mine alone and do not represent those of Amazon or Amazon Web Services. All referenced AWS services and service names are the property of AWS. Although I have made every effort to ensure that the information in this article was correct at writing, I do not assume and hereby disclaim any liability to any party for any loss, damage, or disruption caused by errors or omissions.

Amazon Web Services (AWS) Certified Security Specialty (CSS) Beta Exam

*** NOTE: AWS has pulled this specific certification version, refunding those who took the original beta exam. ***

2026 Status Update: The AWS Certified Security - Specialty is now a mature, standard certification. While the beta period mentioned below ended years ago, the core focus on deep-dive security across IAM, Encryption, and Incident Response remains the primary objective of the current exam version.

I had the opportunity to take the AWS Certified Security Specialty Exam at re:Invent 2016. The exam was in a beta phase where questions were being tested, refined, and the exam pass line was being set. While I can't share actual exam questions, I can share advice for others interested in the certification path.

Preparation

I entered the exam cold, drawing only on my working knowledge of AWS and its services, so my perspective is an unbiased view of the exam's difficulty. While blueprints change, the foundational security pillars remain consistent.

Format

  • Duration: ~3hr Exam Time
  • Volume: > 100 Questions (Beta format)
  • Style: Heavy focus on reading comprehension and identifying technical nuances.
  • Service Focus: High concentration on services with direct security relationships:
    • Identity & Access Management (IAM)
    • AWS WAF & Shield
    • CloudFront & ACM (Certificate Manager)
    • Security Groups, NACLs, and VPC Architecture

My Exam Perspective

I found the questions to be very long, requiring significant reading comprehension to answer accurately. The possible answers were also lengthy, requiring careful differentiation to identify the most valid technical solution. There were substantial parallels to security-related questions found on the Professional-level Architect exams.


NOTICE: All thoughts/statements in this article are mine alone and do not represent those of Amazon or Amazon Web Services. All referenced AWS services are the property of AWS. While I strive for accuracy, I disclaim liability for any disruption caused by errors or omissions.

Amazon Cognito User Pool Admin Authentication Flow with AWS SDK For .NET

Implementing the Amazon Cognito User Pool Admin Authentication Flow with the AWS SDK for .NET offers a path to user authentication without managing the host of components otherwise needed to sign up, verify, store, and authenticate users. Though Cognito is largely framed as a mobile service, it is well suited to supporting web applications. To implement this process, you use the Admin Authentication Flow that AWS documents for server-based authentication. This example assumes that you have already configured a Cognito User Pool with an App, that "Enable sign-in API for server-based authentication (ADMIN_NO_SRP_AUTH)" is checked for that App on the App tab, and that no App client secret is defined for that App; App client secrets are not supported in the .NET SDK. It is also assumed that a Federated Identity Pool is configured to point to the aforementioned User Pool.

This auth flow bypasses the Secure Remote Password (SRP) protocol protections AWS relies on heavily to prevent passwords from ever being sent over the wire. As a result, when used in a client-server web application, your users' passwords are transmitted to the server, and that communication must be protected with strong encryption to prevent compromise of user credentials. My implementation wraps this flow in a CognitoAdminAuthenticationProvider with Authenticate and GetCredentials members. The Authenticate method returns a wrapped ChallengeNameType and AuthenticationResultType set of responses. A challenge is only returned if additional details are needed for authentication, in which case you simply ensure those details are included in the UserCredentials provided to the Authenticate method and call Authenticate again. Once authenticated, an AuthenticationResultType is included in the result and can be used to call the GetCredentials method to obtain temporary AWS credentials.
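
The original provider class is not reproduced here, but the sketch below condenses the calls it wraps using the AWS SDK for .NET. The pool, client, and identity pool identifiers are placeholders, and challenge handling is reduced to a log line; treat it as an outline of the flow rather than production code.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon;
using Amazon.CognitoIdentity;
using Amazon.CognitoIdentityProvider;
using Amazon.CognitoIdentityProvider.Model;
using Amazon.S3;

public static class CognitoAdminAuthSketch
{
    public static async Task AuthenticateAndListBucketsAsync(string userName, string password)
    {
        // Step 1: Authenticate against the User Pool using the admin (no SRP) flow.
        var provider = new AmazonCognitoIdentityProviderClient(RegionEndpoint.USEast1);
        var authResponse = await provider.AdminInitiateAuthAsync(new AdminInitiateAuthRequest
        {
            UserPoolId = "us-east-1_XXXXXXXXX",   // placeholder User Pool ID
            ClientId = "your-app-client-id",      // placeholder App client ID (no client secret)
            AuthFlow = AuthFlowType.ADMIN_NO_SRP_AUTH,
            AuthParameters = new Dictionary<string, string>
            {
                { "USERNAME", userName },
                { "PASSWORD", password }
            }
        });

        // A challenge means more details are required before tokens are issued.
        if (authResponse.ChallengeName != null)
        {
            Console.WriteLine($"Additional challenge required: {authResponse.ChallengeName}");
            return;
        }

        // Step 2: Trade the User Pool ID token for temporary AWS credentials via the Federated Identity Pool.
        var credentials = new CognitoAWSCredentials("us-east-1:identity-pool-guid", RegionEndpoint.USEast1);
        credentials.AddLogin(
            "cognito-idp.us-east-1.amazonaws.com/us-east-1_XXXXXXXXX",
            authResponse.AuthenticationResult.IdToken);

        // Step 3: Use the temporary credentials, here to call S3 ListBuckets.
        using (var s3 = new AmazonS3Client(credentials, RegionEndpoint.USEast1))
        {
            var buckets = await s3.ListBucketsAsync();
            Console.WriteLine($"Buckets visible to this identity: {buckets.Buckets.Count}");
        }
    }
}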

In usage, the temporary credentials returned by GetCredentials can be passed to any AWS service client; the sketch above uses them to call S3 ListBuckets.

As an additional note, the options for the CognitoAWSCredentials Logins dictionary are listed below. This example uses the last listed value.

Logins: {
   'graph.facebook.com': '[FBTOKEN]',
   'www.amazon.com': '[AMAZONTOKEN]',
   'accounts.google.com': '[GOOGLETOKEN]',
   'api.twitter.com': '[TWITTERTOKEN]',
   'www.digits.com': '[DIGITSTOKEN]',
   'cognito-idp.[region].amazonaws.com/[your_user_pool_id]': '[id token]'
}

NOTICE: All thoughts/statements in this article are mine alone and do not represent those of Amazon or Amazon Web services. All referenced AWS services and service names are the property of AWS. Although I have made every effort to ensure that the information in this article was correct at writing, I do not assume and hereby disclaim any liability to any party for any loss, damage, or disruption caused by errors or omissions.

Getting Started with AWS Lambda C# Functions

For those of us that are .NET developers at heart, we have powerful tools for running serverless C# applications on AWS. AWS Lambda now officially supports .NET 10 as a managed runtime, providing long-term support (LTS) through November 2028.

Modern C# support in Lambda has evolved beyond early .NET Core. Developers can now utilize C# File-Based apps, which eliminate much of the traditional boilerplate code. These functions typically publish as Native AOT (Ahead-of-Time) by default, offering up to an 86% improvement in cold start times by removing the need for JIT compilation at runtime.

Prerequisites:

  1. Development Environment: Visual Studio 2022 (latest version) with the .NET 10 SDK installed.
  2. AWS Toolkit for Visual Studio: Install the latest extension from the Visual Studio Marketplace. It now includes Amazon Q Developer for AI-assisted coding and one-click publishing.

Getting Started with the .NET CLI:

The fastest way to scaffold a new function is using the Amazon Lambda Templates. You can install and create a file-based function with these commands:


dotnet new install Amazon.Lambda.Templates
dotnet new lambda.FileBased -n MyLambdaFunction

Key Project References:

  • Amazon.Lambda.Core: The foundational library for Lambda functions.
  • Amazon.Lambda.RuntimeSupport: Required for file-based apps and Native AOT.
  • Amazon.Lambda.Serialization.SystemTextJson: High-performance JSON serialization using source generators.

Using the AWS Toolkit, you can right-click your project and select "Publish to AWS Lambda" to deploy instantly. The toolkit handles the complexity of Native AOT container builds automatically if Docker is installed on your machine.
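
For reference, here is what a minimal handler looks like in the classic class-library style; the file-based template trims this down further, but the serializer registration and ILambdaContext usage are the same. The namespace and the upper-casing logic are purely illustrative.

using Amazon.Lambda.Core;

// Register the JSON serializer that converts the Lambda event payload to and from .NET types.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace MyLambdaFunction
{
    public class Function
    {
        // Entry point invoked by the Lambda runtime: logs the input and returns it upper-cased.
        public string FunctionHandler(string input, ILambdaContext context)
        {
            context.Logger.LogLine($"Processing input: {input}");
            return input?.ToUpperInvariant() ?? string.Empty;
        }
    }
}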


NOTICE: All thoughts/statements in this article are mine alone and do not represent those of Amazon or Amazon Web Services. All referenced AWS services are the property of AWS. This information is current as of early 2026.

FISMA, FedRAMP and the DoD CC SRG: A Review of the US Government Cloud Security Policy Landscape

Note: The US Government cloud security policy landscape has changed significantly since this was originally written. I no longer have the first-hand, current context required to provide a meaningful update to this information, so please treat the following as a historical reference.

The Federal Information Security Management Act (FISMA), a US law signed in 2002, defines the information protection requirements for US Government, "government", data and is applicable to all information systems that process any government data, regardless of ownership or control of such systems. Systems integrators (SIs) under contract to perform work for the government are almost always provided some government furnished information (GFI) or government furnished equipment (GFE), and FISMA requirements extend to the systems owned and/or operated by these SIs if they store or process government data. Government data always remains under the ownership of the source agency, with that agency holding sole responsibility for determining the data's sensitivity level. It is usually a contractual requirement for an SI charged with management of government data to ensure FISMA compliance, and an SI is obligated to destroy or return all GFI and GFE at the end of the contractual period of performance. Government data falls into a number of information sensitivity categories, ranging from public information to the highest levels of classification, and the compliance requirements imposed by FISMA increase in lockstep with that sensitivity.

A large portion of government data under the management or control of most SIs will fall in the public or controlled unclassified information (CUI) buckets. Public data is rather straightforward in that it is publicly releasable and, if compromised, would have little to no impact on the public image, trust, security or mission of the owning government agency and/or its personnel; as such, it requires the least compliance overhead. CUI, on the other hand, is significantly more complex and nuanced. Compromised CUI data could damage the public image, trust, security or mission of the owning government agency and/or its personnel. As such, CUI data has restrictions applied to its distribution [https://www.archives.gov/cui/registry/category-list.html]. With Department of Defense (DoD) data, there are additional types of distribution restrictions defined in DoD Directive (DoDD) 5200.01 v4 [http://www.dtic.mil/whs/directives/corres/pdf/520001_vol4.pdf] and a host of marking requirements [http://www.dtic.mil/whs/directives/corres/pdf/520001_vol2.pdf]. A common misunderstanding of CUI requirements is that, due to its unclassified nature, CUI does not require significant security consideration. This misunderstanding is something to be cognizant of in any engagement with a government agency or SI, and it is advisable to inquire about CUI data restrictions, as this area comes with certain legal as well as contractual ramifications.

Data sensitivity is a multifaceted factor that the National Institute of Standards and Technology (NIST) breaks down into three areas: Confidentiality, Integrity and Availability, the “C-I-A category”, presented in the format {x,x,x} where each “x” is low, moderate or high. The highest of these three categories determines the sensitivity, and therefore the compliance requirements, of an information system. Determining the data sensitivity of an information system is a process defined in NIST special publication (SP) 800.60 volume 1 [http://csrc.nist.gov/publications/nistpubs/800-60-rev1/SP800-60_Vol1-Rev1.pdf]. This process starts with determining the types of data processed and/or stored by an information system, a critical step to ensure accurate compliance implementation. This enables the selection of data type categories, defined in NIST SP 800.60 volume 2 [http://csrc.nist.gov/publications/nistpubs/800-60-rev1/SP800-60_Vol2-Rev1.pdf]. For each data category applicable to an information system, NIST 800.60 volume 2 provides a baseline C-I-A category assessment as well as a number of caveats that could dictate a higher or lower categorization for each of the C-I-A categories. An information owner can choose to adjust these assessments based on operational factors; however, deviation from a C-I-A category baseline requires justification. The result is a list of applicable data type categories and an assessed C-I-A categorization for each. The highest categorization across the three C-I-A categories for all data types becomes the baseline level for the information system. The output of this process is generally a document describing the applicable data type categories and the assessed C-I-A categorization for each, with required justifications. This document generally requires review and signature by the system owner and an organization's authorization authority. Any change in data processed or stored by an information system should trigger a new iteration of this and all subsequent processes.
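
As a toy illustration of that high-water-mark logic (the data types and ratings below are invented for the example, not drawn from NIST SP 800.60), the selection reduces to taking the maximum impact per C-I-A axis across all applicable data types and then the maximum across the three axes:

using System;
using System.Collections.Generic;
using System.Linq;

class CategorizationExample
{
    // Impact levels ordered so that Max() yields the high-water mark.
    enum Impact { Low = 0, Moderate = 1, High = 2 }

    static void Main()
    {
        // Hypothetical data types with assessed {Confidentiality, Integrity, Availability} impacts.
        var dataTypes = new Dictionary<string, (Impact C, Impact I, Impact A)>
        {
            ["Public web content"]       = (Impact.Low, Impact.Low, Impact.Low),
            ["Contract management data"] = (Impact.Moderate, Impact.Moderate, Impact.Low),
            ["Personnel records (CUI)"]  = (Impact.Moderate, Impact.Moderate, Impact.Moderate)
        };

        // High-water mark per axis across every data type the system handles.
        var c = dataTypes.Values.Max(v => v.C);
        var i = dataTypes.Values.Max(v => v.I);
        var a = dataTypes.Values.Max(v => v.A);

        // The overall baseline is the highest of the three axes.
        var overall = new[] { c, i, a }.Max();

        Console.WriteLine($"System categorization {{{c}, {i}, {a}}} -> {overall} baseline");
    }
}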

Compliance requirements come in the form of auditable states for various aspects of an information system's infrastructure, architectural design, implementation and the policies and practices, “governance”, established surrounding that system's management. There are generally two different types of controls: security controls and specific vendor product or process controls, "implementation controls". Security controls are high level and cover broad requirements for an information system that often involve a number of physical implementation aspects and/or process documentation components to meet. Implementation controls are often very specific, requiring verification of state across multiple components to roll up to security control compliance. The NIST SP 800 series documents [http://csrc.nist.gov/publications/PubsSPs.html#SP 800], stemming from the need to define guidelines for compliance with FISMA and other laws, are the basis for government agency compliance programs. These programs draw from the security controls defined in NIST SP 800.53 [http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf] Appendix D. Organizations across the government are responsible for implementation of their own security and compliance programs as required by FISMA. As a result, processes vary across agencies, though most are implementations of NIST SP 800.37 [http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-37r1.pdf], which describes a process called the Risk Management Framework (RMF). RMF is a risk-based approach to addressing information technology (IT) security, with emphasis placed on control compliance priority and assessing the overall risk posed by a system. NIST SP 800.53 Revision 4 Appendix D defines three control baselines, low, moderate and high, corresponding to the assessed data sensitivity level of an information system. Selection of NIST controls for a given information system within non-DoD government organizations generally falls to agency-specific security requirement determinations. NIST also defines the concept of overlays, which are purpose-driven control sets, the privacy overlay(s) [http://www.dla.mil/Portals/104/Documents/GeneralCounsel/FOIA/FOIA_PrivacyOverlay_150420.pdf] being among the most commonly mentioned; overlays are generally consistent across organizations and may overlap or be additive to an organization's own security requirements.

The White House established a cloud first policy across government agencies in 2011 [https://www.whitehouse.gov/sites/default/files/omb/assets/egov_docs/federal-cloud-computing-strategy.pdf] that began a new initiative to evaluate commercial cloud service offerings (CSO) provided by cloud service providers (CSP) before the use of government-owned hosting solutions. This policy, established in large part as a measure to address the huge IT budget across the government, recognized that industry was far more agile and had far more resources than the government to produce innovative and cost-effective solutions. In implementing this policy, agencies began assessing CSP CSOs to ensure that they met FISMA requirements. The results of these assessments were authorizations to operate (ATO) for systems leveraging the same underlying cloud infrastructure, which began to overlap with wasteful duplication of work across agencies. The Federal Risk and Authorization Management Program (FedRAMP) was the solution, creating a common assessment program and a set of three control baselines, low, moderate and high, based on NIST SP 800.53 controls, for CSP CSOs such that Provisional ATOs (P-ATO), attesting to a CSP's contribution to a system's control coverage, could be shared and trusted across agencies. The term provisional is used in that they are components of a full system's ATO, not full ATOs in themselves. It follows that FISMA applies to all government systems, and FedRAMP is a specific program for CSPs to implement and assess compliance with FISMA requirements for their CSOs, covering those aspects within the boundaries of a CSP's purview. This enables their customers to reuse, "inherit", the compliance assessments already completed for CSP CSOs, reducing the overall workload and cost of implementing security for government IT systems.

Each of the FedRAMP control baselines represents a tiered compliance level aligned to NIST SP 800.60 volume 2 data categorizations for data sensitivity, with an escalating number of applicable NIST SP 800.53 controls [https://www.fedramp.gov/resources/templates-2016/]. FedRAMP authorizations come in two forms: agency-specific ATOs and those granted by the FedRAMP Joint Authorization Board (JAB), which itself is a joint venture between government agency chief information officers (CIOs). An agency ATO is one in which a specific agency has assessed the compliance of a CSO against its specific security requirements and granted an ATO to that CSO, which can then be leveraged as a starting point by the next agency that comes along and uses that CSO. The security requirements, and hence the implemented controls, may or may not meet those of the next organization and therefore may not be reusable. A JAB P-ATO, on the other hand, requires compliance with a common control baseline and is the most stringent path that a CSP can take to FedRAMP compliance. In this path, a CSP prepares a documentation package covering its compliance implementation to meet a specific FedRAMP control baseline. A 3rd Party Assessment Organization (3PAO), accredited by the FedRAMP program management office (PMO), must concur with the CSP's independent control assessment. Each of the JAB members then further reviews it and must concur before granting a P-ATO. FISMA requirements continue to apply to the systems implemented on top of FedRAMP authorized CSP CSOs, and the system owner is responsible for any deltas in control compliance beyond those covered under the CSP CSO authorization.

For many years, the DoD defined its own compliance program, called the DoD Information Assurance Certification and Accreditation Process (DIACAP), described in DoD Instruction (DoDI) 8510.01, with its own security controls. Reissuance of this program in 2015 saw a rename, RMF for DoD IT, “RMF”, and a shift to NIST SP 800.37 processes as well as NIST 800.53 security controls. This established a new path to security implementation across DoD systems as a whole, but had a particular impact on the DoD's ability to leverage P-ATOs granted under the FedRAMP program. Older, established information systems have the leeway of a grace period to convert from DIACAP to RMF; however, all DoD cloud systems are required to implement RMF from the start. The DoD is by far the largest of government organizations and is a huge target for attackers, with a treasure trove of information spread across sprawling and sometimes ancient IT systems. Because of the DoD's huge attack surface and the sensitivity of its mission, the DoD requires more stringent security controls and processes than those imposed on other government agencies. Fast forwarding through several years of complex work to change an organization as large as the DoD, and with significant support from the DoD CIO, the Defense Information Systems Agency (DISA) was given the task of defining and documenting the gaps between the FedRAMP baselines and DoD requirements. DISA delivered a document in January of 2015 called the Cloud Computing (CC) Security Requirements Guide (SRG).

The CC SRG, also branded as FedRAMP+, inherited some terminology from earlier documentation attempts, called Impact Levels (IL), which have evolved to align to FedRAMP's baseline levels. The CC SRG control requirements are specifically based on the FedRAMP moderate baseline controls, and a CSP must meet the moderate baseline control set for a DoD authorization. IL 2 (there is no IL 1) aligns with the FedRAMP moderate baseline and is applicable to IT systems processing or storing, at most, publicly releasable data where the NIST C-I-A categorization of the system is low to moderate. IL 4 (there is no IL 3) aligns with the FedRAMP moderate baseline and is applicable to IT systems processing CUI data where the NIST C-I-A categorization of the system is moderate. IL 4 systems may include those that process and/or store Personally Identifiable Information (PII), Protected Health Information (PHI), etc.; however, they may require application of additional overlay controls if the information system meets certain Privacy Act [https://www.justice.gov/opcl/privacy-act-1974] criteria. It may be possible to treat some systems where the NIST C-I-A categorization is high as IL 4 if the authorization of the CSP CSO is up to the high baseline and the sensitivity of the data does not cross the national security systems (NSS) threshold. IL 5 aligns with the FedRAMP high baseline and is applicable to more sensitive CUI as well as NSS where the NIST C-I-A categorization of the system is high. At present, there are physical separation concerns that prevent IL 5 workloads from deployment on commercial cloud platforms. Finally, IL 6 is beyond FedRAMP program alignment and aligns with data categorized at the SECRET level, making it generally out of scope for CC SRG documentation, as such data requires physical environment isolation that does not map to public models.

The CC SRG stipulates a requirement that IL 4 and IL 5 workloads remain isolated from the internet and connect to the non-secure internet protocol router network (NIPRNet) via direct circuit or internet protocol security (IPsec) virtual private network (VPN) to a NIPRNet edge gateway. To support the IL 4 and IL 5 NIPRNet connection requirement, DISA has defined the concept of a Boundary Cloud Access Point (BCAP), “CAP”, that acts as the gateway between a CSP's CSO and the NIPRNet edge. DISA has the task of providing an Enterprise CAP for DoD systems leveraging authorized cloud services. DISA delivered an initial CAP capability in late 2015 and, later, a functional requirements (FR) document called the Secure Cloud Computing Architecture (SCCA) [http://iase.disa.mil/cloud_security/Pages/index.aspx] describing its desired future state. A CAP serves two primary purposes: connectivity between a CSP and the NIPRNet, and protection of the DoD Information Network (DoDIN), a broad term for DoD networks, from threats originating from a CSP CSO. In practice, today's CAP is composed of a common point, referred to as a meet-me-point, where CSP and DoD infrastructure can meet, a co-location facility (co-lo), as well as the appropriate security stack to monitor and protect against threats. IL 4 and IL 5 systems may pass outbound traffic to the internet through the NIPRNet internet access points (IAP) and may accept traffic inbound from the IAP; however, inbound traffic may require whitelisting.

DoD organizations may present a case for deployment of their own CAP solution aligned with the FR specifications; however, this requires DoD CIO approval and a compelling use case. The Navy's Space and Naval Warfare Systems Command (SPAWAR) Systems Center Atlantic, “SSC LANT”, began its cloud exploration several years before DISA gained its cloud roles and responsibilities and before DoD policy was ready for commercial cloud services. The commercial service integration integrated project team (IPT) began working through many of the challenges that have enabled the DoD as a whole to move toward cloud, with the support of the Navy CIO at the time, Terry Halvorsen, who later became the DoD CIO. As part of those efforts, the Navy established a CAP capability, which operates today concurrently with the DISA Enterprise capability.

The authority to authorize information systems within the DoD under the RMF program resides at the CIO level; however, it is generally delegated to authorizing officials (AO) aligned to organizational verticals. An AO is the final authority that signs a system's ATO, granting it the authority to operate given the compliance mechanisms documented in the system's security package. The structure of AO authority delegation varies significantly across the DoD, with services like the Air Force having a highly distributed authority and services like the Army having a more centralized approach. This structure, and the civilian or military staff filling these roles, change frequently due to assignment rotations. The structure of the review process for any system security package will be specific to that organization; however, it follows the general model of a review organization that reviews documentation and then presents the risk to the AO for acceptance. In some cases, these organizations may conduct only documentation reviews, while in others they may leverage a team of auditors to validate control compliance.

The RMF process is a circular process flow: (step 1) system categorization, (step 2) security control selection, (step 3) security control implementation, (step 4) security control assessment, (step 5) system authorization and (step 6) monitoring of security controls. Steps 1 and 2 in the RMF process directly align with the steps required to determine an information system's IL. Therefore, it naturally follows that starting RMF leads to determining an information system's IL. The IL of a system is the key component in determining whether a CSP's CSO authorization is sufficient to support an information system's needs. It follows that a CSP CSO authorization must meet or exceed the IL of the proposed system workload. From that point forward, the IL becomes less relevant and the focus shifts primarily to implementation and then assessment of the security controls selected after system categorization. DISA, as a broader mission responsibility, defines both SRGs for other broad technologies and security technical implementation guides (STIGs) [http://iase.disa.mil/stigs/Pages/index.aspx] for specific vendor products. STIGs provide very detailed checks for product configuration, all targeted at compliance with a higher-level security control. As implementation and assessment progress, STIG evidence, “checklists”, serves as compliance evidence for a system's eventual risk assessment. There is overlap in intent across NIST security controls, and STIG checks may apply to several different security controls. For this reason, the Control Correlation Identifier (CCI) list [http://iasecontent.disa.mil/stigs/zip/u_cci_list.zip] maps overlapping NIST security controls to the STIGs that address a security control. It is important to note that not all STIG checks map to a CCI, not all CCIs will have mapped STIG “checklists”, and even when mapped they may not provide complete CCI coverage. In a large environment, you might imagine that multiple STIGs could apply to every server, “instance”, and often a STIG is applied multiple times across instances in an environment. CCIs not supported, completely or in part, by STIG checklists require documentation. This documentation, called a system security plan (SSP), covers an information system's CCI compliance, supported by STIG checklists and system governance processes, to facilitate system risk acceptance. The format of an SSP may be specific to the authorizing organization; however, there is spotty coverage of DoD SSP templates in the wild. A DoD SSP will likely be different than a FedRAMP SSP given the emphasis on CCIs, and therefore the FedRAMP SSP templates generally do not apply to DoD systems.
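
To make those mapping relationships concrete, here is a toy sketch; the identifiers and mappings are illustrative placeholders, not entries from the DISA CCI list, and real tooling works from the published XML:

using System;
using System.Collections.Generic;

class CciRollupExample
{
    static void Main()
    {
        // Illustrative CCI -> NIST control mapping (the real mapping comes from the DISA CCI list).
        var cciToControl = new Dictionary<string, string>
        {
            ["CCI-0001"] = "AC-1",
            ["CCI-0002"] = "AC-1",
            ["CCI-0003"] = "CM-6"
        };

        // Illustrative STIG checklist results, keyed by the CCI each check supports.
        var stigResults = new Dictionary<string, bool>
        {
            ["CCI-0001"] = true,   // compliant
            ["CCI-0003"] = false   // open finding
        };

        foreach (var entry in cciToControl)
        {
            if (stigResults.TryGetValue(entry.Key, out var compliant))
                Console.WriteLine($"{entry.Value} / {entry.Key}: STIG evidence, {(compliant ? "compliant" : "open finding")}");
            else
                Console.WriteLine($"{entry.Value} / {entry.Key}: no STIG coverage -> document in the SSP");
        }
    }
}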

DoD information systems must also comply with the requirements of the Cyber Incident Handling Program defined in DoDI 8530.01 [http://www.dtic.mil/whs/directives/corres/pdf/853001p.pdf], “cyber defense” or “C2”. The cyber defense program establishes a three-tiered reporting chain, covering threat detection and incident response, starting with US Cyber Command (USCYBERCOM) at tier one and extending to mission system owners at tier three. In between, at tier two, several participants enable both communications surrounding, and oversight of, cyber threat monitoring. Of those participants, the Boundary Cyber Defense (BCD) and Mission Cyber Defense (MCD) roles are the most important for cloud. The BCD role is to monitor and protect the DoDIN edge, in this case the NIPRNet edge via a NIPRNet Federated Gateway (NFG), where a meet-me-point connects to the NIPRNet. The CAP provider should establish the required BCD relationship. The MCD role must be filled by a Cyber Defense Service Provider (CDSP), often referred to as a CNDSP due to prior lexicon, which itself must be accredited by USCYBERCOM and which provides an oversight role to the tier 3 mission system owner. All mission systems must align with an accredited CDSP in order to connect to the DoDIN. Depending on a mission system's organizational alignment, it will fit most appropriately with one CDSP or another; however, if that CDSP is unable to provide support, DISA generally acts as the provider of last resort. Alignment with a CDSP generally takes the form of a signed memorandum of agreement (MOA) or service level agreement (SLA) and requires some exchange of funds between governmental organizations. Obtaining a cloud permission to connect (CPTC) to the DISA CAP from the DISA Cloud Office requires documentation of this relationship. Since this alignment process can take some time, it is best to contact the appropriate CDSP at the very beginning of any DoD cloud project.

Amazon Web Services (AWS) specifically provides CSOs authorized in bundles along the boundaries of its regions. Each service is considered a CSO, and a list of CSOs covered under AWS authorizations is provided on the AWS DoD SRG Compliance site [https://aws.amazon.com/compliance/dod/]. The US East and US West region authorization under the FedRAMP program is at the moderate control baseline, and the GovCloud authorization is at the high control baseline. AWS provides an Enterprise Accelerator - Compliance for NIST-based Assurance Frameworks [https://s3.amazonaws.com/quickstart-reference/enterprise-accelerator/nistv2/latest/docs/standard-nist-architecture-on-the-aws-cloud.pdf] and a security control matrix [https://s3.amazonaws.com/quickstart-reference/enterprise-accelerator/nistv2/latest/docs/NIST-800-53-Security-Controls-Mapping.xlsx] that explain both how AWS services align to the NIST framework and the controls that AWS is responsible for maintaining partial or complete compliance with. For non-DoD government systems, region selection starts with identifying the available regions with the proper authorization level and should consider both cost and geo-alignment factors. For DoD systems, region selection is a bit more direct in that public IL 2 systems can choose from all CONUS regions, and all other DoD workloads at IL 4 must use GovCloud. AWS is not able to support IL 5 workloads for the DoD today due to physical separation concerns. At the further end of the spectrum, AWS may be able to offer support for IL 6 workloads, and for a category not identified in the CC SRG for higher classification levels, via completely isolated private service regions. GovCloud is sometimes confused as being the AWS IL 6 service capability; however, that is not the case. An area not yet covered is other legal obligations surrounding data, especially data subject to the International Traffic in Arms Regulations (ITAR) or the Export Administration Regulations (EAR), henceforth "ITAR", both of which restrict the transfer of certain military, industrial or manufacturing information internationally. The GovCloud region complies with ITAR responsibilities covering the CSOs provided through that region and follows the standard AWS shared responsibility model [https://aws.amazon.com/compliance/itar/]. AWS does not restrict customer use of GovCloud once the customer is vetted to meet requirements for access; the account owner, holder of the root account, must be a US person on US soil with a legitimate need to access the region. Customers must then implement proper controls on their infrastructure and governance to meet ITAR requirements.

NOTICE: All thoughts/statements in this article are mine alone and do not represent those of Amazon or Amazon Web services. All referenced AWS services and service names are the property of AWS. Although I have made every effort to ensure that the information in this article was correct at writing, I do not assume and hereby disclaim any liability to any party for any loss, damage, or disruption caused by errors or omissions, whether such errors or omissions result from negligence, accident, or any other cause.

Using Linqpad to Query Amazon Redshift Database Clusters

Looking for a quick and easy way to query an Amazon Redshift database cluster? I was, and the first place I turned was my favorite tool for this kind of thing, LINQPad. I was a bit dismayed to find that no one, as far as I could tell, has developed a LINQPad database driver for Redshift. A small note: there are a few PostgreSQL drivers, and Redshift is supposed to be PostgreSQL compatible; however, none of them seemed to work for Redshift.

Credit goes to the author of this article describing the use of LINQPad to connect to MS Access; I made a few tweaks and, boom, I had a working way to connect to and query Redshift. So, in the pay-it-forward spirit, I thought I'd share.

2026 Driver Update: AWS is retiring the legacy 1.x ODBC driver on June 1, 2026. Ensure you have installed the Amazon Redshift ODBC Driver 2.x. Unlike previous versions, the x64 driver is now the standard requirement for modern versions of LINQPad.
// PREREQUISITES:
// (1) Copy and paste this entire block of code into a Linqpad query window, 
//     no connection needed, and change language to C# Statement(s).
// (2) To use the .NET ODBC assembly, you'll have to press F4 then click on the 
//     "Additional Namespace Imports" tab. Add "System.Data.Odbc".
// (3) Install the Amazon Redshift ODBC Driver 2.x (x64). 
// (4) Update the query settings below.

// ***************** Update Settings Below *****************
string endpoint = " <endpoint> ";
string database = " <database_name> ";
string user     = " <username> ";
string pass     = " <password> ";
string port     = "5439"; // Default Redshift port

string table = "";
string query = "SELECT * FROM " + table; 
// ***************** End Update Settings *******************

string connectionString = $"Driver={{Amazon Redshift (x64)}}; Server={endpoint}; Database={database}; UID={user}; PWD={pass}; Port={port};";

using(OdbcConnection connection = new OdbcConnection(connectionString))
{
    Console.WriteLine($"Connecting to [{endpoint}]...");
    try
    {
        if (query.StartsWith("SELECT", StringComparison.OrdinalIgnoreCase))
        {
            using (OdbcDataAdapter adapter = new OdbcDataAdapter(query, connection))
            {   
                DataSet data = new DataSet();
                adapter.Fill(data, table);
                Console.WriteLine($"Found [{data.Tables[0].Rows.Count}] rows");
                data.Dump();
            }
        }
        else
        {
            connection.Open();
            using (OdbcCommand command = new OdbcCommand(query, connection))
            {
                var impactedRows = command.ExecuteNonQuery();
                Console.WriteLine($"[{impactedRows}] rows impacted");
            }
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.ToString());
    }
}

NOTICE: All thoughts/statements in this article are mine alone and do not represent those of Amazon or Amazon Web services. All referenced AWS services and service names are the property of AWS. Although I have made every effort to ensure that the information in this article was correct at writing, I do not assume and hereby disclaim any liability to any party for any loss, damage, or disruption caused by errors or omissions.

AWS EC2 Auto Recovery Using CloudWatch

CloudWatch includes a powerful feature that enables auto recovery of an EC2 instance if it ever fails a system status check. A key benefit of this feature is that it relaunches an instance with the exact same configuration, preserving any auto-assigned public IP addresses and using the current instance volumes.

Modern Update (2026): Automatic recovery is now supported for most deployed Amazon EC2 instances. Most current-generation instances (Nitro-based) support Simplified Automatic Recovery, which can be configured directly from the EC2 Instance console without manually building a CloudWatch alarm from scratch.

Every EC2 instance is monitored for two distinct types of status checks that report as metrics to CloudWatch:

  • System status checks: These identify AWS infrastructure issues, such as hardware failures, network connectivity loss, or power outages in the data center.
  • Instance status checks: These identify software or configuration issues, such as corrupted file systems, incompatible kernels, or exhausted memory.

The auto recovery option specifically targets system status check failures. It enables the automated migration of an instance to a new physical host when the StatusCheckFailed_System metric enters an alarm state.
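
If you prefer to wire this up yourself rather than rely on Simplified Automatic Recovery, the classic approach is a CloudWatch alarm on StatusCheckFailed_System whose action is the EC2 recover ARN. Here is a minimal sketch with the AWS SDK for .NET; the region, instance ID, and evaluation thresholds are placeholders to tune for your environment.

using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon;
using Amazon.CloudWatch;
using Amazon.CloudWatch.Model;

public static class AutoRecoveryAlarm
{
    // Creates a CloudWatch alarm that triggers EC2 auto recovery for one instance.
    public static async Task CreateAsync(string instanceId)
    {
        using (var cloudWatch = new AmazonCloudWatchClient(RegionEndpoint.USEast1))
        {
            await cloudWatch.PutMetricAlarmAsync(new PutMetricAlarmRequest
            {
                AlarmName = $"auto-recover-{instanceId}",
                Namespace = "AWS/EC2",
                MetricName = "StatusCheckFailed_System",
                Dimensions = new List<Dimension>
                {
                    new Dimension { Name = "InstanceId", Value = instanceId }
                },
                Statistic = Statistic.Maximum,
                Period = 60,                      // one-minute periods
                EvaluationPeriods = 2,            // two consecutive failures
                Threshold = 1,
                ComparisonOperator = ComparisonOperator.GreaterThanOrEqualToThreshold,
                // The recover action migrates the instance to healthy hardware.
                AlarmActions = new List<string> { "arn:aws:automate:us-east-1:ec2:recover" }
            });
        }
    }
}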

Requirements and Considerations

  • This feature requires VPC EBS-backed instances.
  • It is available for the majority of current instance types in all AWS regions.
  • Placement Groups: Recovered instances remain in their original placement group.
  • Notifications: It is highly recommended to link these alarms to an Amazon SNS topic to receive immediate alerts when a recovery event is triggered.

For the most up-to-date configuration steps, see the official Amazon EC2 Instance Recovery documentation.

NOTICE: All thoughts/statements in this article are mine alone and do not represent those of Amazon or Amazon Web Services. Referenced AWS services are the property of AWS. While I strive for accuracy, I disclaim liability for any disruption caused by errors or omissions.

Simple AWS Lambda Function to Snapshot All Attached EBS Volumes on an EC2 Instance

Automating EBS snapshots is a critical part of maintaining a resilient infrastructure. Below is a simple Python Lambda function that identifies all volumes attached to an EC2 instance and creates a snapshot for each.

Modern Update (2026): While custom Lambda scripts like this one are great for specific logic, AWS now recommends using Amazon Data Lifecycle Manager (DLM) for standardized snapshot automation. It is policy-driven and doesn't require maintaining custom code.

Lambda Function (Python)

import boto3
import datetime

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')
    # Replace with your Instance ID or logic to fetch it
    instance_id = 'i-xxxxxxx' 
    
    descriptions = ec2.describe_instances(InstanceIds=[instance_id])
    for reservation in descriptions['Reservations']:
        for instance in reservation['Instances']:
            for block_device in instance['BlockDeviceMappings']:
                vol_id = block_device['Ebs']['VolumeId']  # the key is 'Ebs' in the describe_instances response
                description = f"Automated snapshot of {vol_id} from {instance_id} at {datetime.datetime.now()}"
                
                snapshot = ec2.create_snapshot(VolumeId=vol_id, Description=description)
                print(description)
                
    return "Finished automated snapshot of all attached volumes."

Required IAM Policy Document

Attach this policy to your Lambda execution role. It allows the function to write its CloudWatch Logs output and to make the EC2 calls used above.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateSnapshot",
        "ec2:DescribeInstances"
      ],
      "Resource": "*"
    }
  ]
}

AWS Policy Generator is a helpful tool if you want to further restrict these permissions.


NOTICE: All thoughts/statements in this article are mine alone and do not represent those of Amazon or Amazon Web Services. Referenced AWS services are the property of AWS. While I strive for accuracy, I disclaim liability for any disruption caused by errors or omissions.

Solving Installation Problems with .NET 3.5 on Windows Server 2012 R2

While newer versions of Windows Server (2022/2025) have improved, installing .NET 3.5 still frequently fails because the payload is not part of the local Side-by-Side (SxS) store. The easiest way to solve this is to point the installer to the original installation media.

2026 Update: This method remains the gold standard for Windows Server 2022 and 2025. If you are in an air-gapped or WSUS-managed environment, the GUI will fail unless you provide this alternate path.

Option 1: The Fast Way (DISM)

Mount your Windows Server ISO (usually as drive D: or E:) and run this from an elevated Command Prompt. This forces the server to use the media instead of trying to reach Windows Update.

dism /online /enable-feature /featurename:NetFx3 /all /Source:D:\sources\sxs /LimitAccess

Option 2: The PowerShell Way

If you prefer PowerShell, use the following command. The -Source parameter is key here.

Install-WindowsFeature -Name NET-Framework-Core -Source D:\sources\sxs

Option 3: No Installation Media?

If you don't have the ISO, you can temporarily bypass WSUS to grab the files directly from Microsoft by changing a Group Policy setting:

  • Run gpedit.msc and go to: Computer Configuration > Administrative Templates > System.
  • Enable: "Specify settings for optional component installation and component repair".
  • Check the box: "Download repair content and optional features directly from Windows Update...".

Tip: If you are using a WIM file instead of a mounted drive, you can use the source format WIM:C:\install.wim:2 where "2" is the index of your server edition.

lsass.exe, failed with status code c0000417 on DISA STIG'd Server Resulting from "EnPasFltV2" Password Filter

If you're working with a Windows Server 2012/2012 R2 server that has had DISA Security Technical Implementation Guide (STIG) mitigations implemented and attempting to promote that server to a domain controller, you will very likely encounter an error that forces the server to reboot automatically.

If you see "A critical system process, C:\Windows\system32\lsass.exe, failed with status code c0000417" in your System log, it has been my experience that the password filter required by STIG ID: WN12-GE-000009 (Rule ID: SV-52104r1_rule, Vuln ID: V-1131) is the cause.

Crucial Step: To successfully provision a pre-STIG'd image as a domain controller, this password filter must be temporarily disabled.

To Disable the Password Filter:

  1. Open the Registry Editor (regedit.exe).
  2. Navigate to: HKLM\System\CurrentControlSet\Control\LSA
  3. Locate the Notification Packages value.
  4. Remove EnPasFltV2x86 and/or EnPasFltV2x64 from the list.
  5. Restart the server.

On a related note, very little documentation is available about the compatibility of "EnPasFltV2" with Windows Server 2012 R2. Do not assume that this password filter module is stable or compatible just because it is a STIG requirement; using it alongside other tools, such as Microsoft LAPS, has been known to cause similar LSASS termination loops.


Reference: For more on password filter development and LSA notification packages, see the Microsoft Developer Documentation.

Getting Your Amazon Web Services (AWS) Simple Email Service (SES) Credentials

*** UPDATE: This project was migrated from CodePlex to GitHub ***

Obtaining Your Amazon SES SMTP Credentials can be more confusing than one would think. If you find yourself having difficulty authenticating to SES with the credentials that you got from the AWS Console, fret not, it's likely a simple fix.

It is possible to create an IAM user from both the IAM and SES areas of the Console. Depending on the path you take, that user's name and any manually generated password may not be what SES SMTP authentication expects. The Access Key ID is used as the SMTP username; however, the related Secret Key is not used as-is as the SMTP password; it has to be converted first.

Required IAM Policy

Be sure you have given your IAM user the necessary permissions to relay email through SES. Use the following "least-privilege" policy snippet:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ses:SendRawEmail",
      "Resource": "*"
    }
  ]
}

Manual Credential Generation

The Amazon Web Services (AWS) Simple Email Service (SES) SMTP Credential Generator uses your IAM user's secret key to create the signing hash SES expects as the SMTP password for sending raw email. This lets you relay email through SES in whatever format you specify at the time of sending. The generator does not store or otherwise send your credentials anywhere and is completely safe to use.
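
For reference, the conversion the generator performs is the legacy derivation AWS documented at the time: an HMAC-SHA256 signature of the literal string "SendRawEmail", keyed with the IAM secret key, with a version byte prepended and the result Base64 encoded (the SMTP username is simply the Access Key ID). A minimal sketch follows; note that AWS has since documented a newer, Signature Version 4-based derivation.

using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public static class SesSmtpPassword
{
    // Converts an IAM secret access key into the legacy SES SMTP password.
    public static string FromSecretKey(string secretAccessKey)
    {
        byte[] message = Encoding.UTF8.GetBytes("SendRawEmail");
        byte[] version = { 0x02 };

        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secretAccessKey)))
        {
            byte[] signature = hmac.ComputeHash(message);
            return Convert.ToBase64String(version.Concat(signature).ToArray());
        }
    }
}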


Note: The SendEmail permission enables a user to provide input via the API that SES uses to construct a message, whereas SendRawEmail enables a user to relay an already formatted email message (complying with RFC 5322).
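
As a usage sketch, relaying a message over the SES SMTP interface with those derived credentials looks like the following; the endpoint, port, and addresses are placeholders, and both addresses must be verified while your account is in the SES sandbox.

using System.Net;
using System.Net.Mail;

public static class SesSmtpRelay
{
    public static void Send(string accessKeyId, string smtpPassword)
    {
        // STARTTLS on port 587 against the region's SES SMTP endpoint.
        using (var client = new SmtpClient("email-smtp.us-east-1.amazonaws.com", 587))
        {
            client.EnableSsl = true;
            client.Credentials = new NetworkCredential(accessKeyId, smtpPassword);

            client.Send(new MailMessage(
                "sender@example.com",
                "recipient@example.com",
                "Test message",
                "Sent through Amazon SES via SMTP."));
        }
    }
}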

Amazon Web Services (AWS) VPC, Security Group NACL, RouteTable Modeling Tool

I've found myself wondering on more than one occasion if a network issue I'm having is a result of security configuration (Routing Rules, Network ACLs, Security Groups) or something else entirely. To solve this, I originally wrote a network modeling tool to trace traffic flow from a source to a destination, simulating security evaluation at each hop.

Deprecation Notice (2026): This custom tool is no longer maintained and has been taken offline. AWS now offers native, high-fidelity services that provide this functionality with much greater accuracy:
  • VPC Reachability Analyzer: Best for point-to-point troubleshooting. It shows hop-by-hop details and identifies the exact blocking component (e.g., a specific NACL rule).
  • Network Access Analyzer: Best for security auditing. It helps you identify unintended network access and verify that your network segments are properly isolated.

While my tool served its purpose during the early days of VPC management, the native AWS analyzers now provide a more robust way to ensure your network configuration matches your intended connectivity.


Archive Note: For those interested in the logic behind traffic tracing, you can still view the original project structure in my blog-code repository.

ANSWERED: Amazon Web Services (AWS) Certified Solutions Architect (CSA) – Associate Level, Sample Exam Questions

There are many posts with various accounts from the AWS CSA exam, so I will try to keep mine concise and to the point. The exam requires a foundational understanding of all AWS services. Questions are situational and focused on technical nuances. Rather than a test of deep systems architecture, it is largely a test of your familiarity with the AWS product ecosystem.

Historical Perspective: This post was originally written in 2014. While the core concepts of S3, EC2, and ELB remain foundational, modern AWS exams (like SAA-C03) now include advanced topics like Serverless, Containers, and the Well-Architected Framework.

My studies began with the sample exam questions provided by AWS. Since AWS does not provide the answers to those samples, I've documented my research and answers for them below.

Sample Exam Question Deep Dive

  1. Amazon Glacier is designed for (Choose 2 answers)

    • Answer(s): B - Infrequently accessed data, C - Data archives.
    • Explanation: Glacier is an archival storage service. You are charged for data retrieval, so it's intended for data you don't expect to need more than once a month.
  2. If an instance fails a health check behind an ELB, what happens?

    • Answer(s): C - The ELB stops sending traffic to the instance that failed its health check.
    • Explanation: ELBs dynamically forward traffic only to healthy instances. Once a failure is detected, the ELB pulls the instance out of the rotation until it passes again.
  3. How can you serve confidential training videos in S3 via CloudFront without making S3 public?

    • Answer(s): A - Create an Origin Access Identity (OAI) for CloudFront and grant it access to the S3 objects.
    • Explanation: An OAI acts as a virtual user for your CloudFront distribution. By granting access to the OAI and blocking public access to the bucket, you ensure users must go through CloudFront.
  4. What occurs when an EC2 instance in a VPC with an Elastic IP is stopped and started? (Choose 2 answers)

    • Answer(s): B - All data on instance-store devices will be lost; E - The underlying host for the instance is changed.
    • Explanation: Because instance storage is physically attached to the host, that data is volatile. Stopping the instance releases the hardware reservation; when you start it again, it is typically provisioned on a different physical host.
  5. In the basic monitoring package for EC2, what metrics does CloudWatch provide?

    • Answer(s): D - Hypervisor visible metrics such as CPU utilization.
    • Explanation: AWS respects the guest OS boundary. Without an agent installed, CloudWatch can only see what the hypervisor sees: CPU utilization, disk I/O, and network I/O. The sketch below shows how to list these metrics for an instance.
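
The following sketch assumes the AWS Tools for PowerShell CloudWatch module is installed; the instance ID is a placeholder.

# Sketch: list the hypervisor-level EC2 metrics CloudWatch has for one instance.
# Assumes the AWS Tools for PowerShell CloudWatch module; the instance ID is a placeholder.
Import-Module AWS.Tools.CloudWatch

$dimension = New-Object Amazon.CloudWatch.Model.DimensionFilter
$dimension.Name = "InstanceId"
$dimension.Value = "i-0123456789abcdef0"

Get-CWMetricList -Namespace "AWS/EC2" -Dimension $dimension |
    Select-Object MetricName |
    Sort-Object MetricName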

Reference: For the latest exam requirements, visit the Official AWS CSA Certification Page.

HOW TO: Setup IIS Web Server in Windows Server 2012 Using PowerShell

Automating IIS setup via PowerShell is a foundational skill for maintaining consistent web environments. While the core steps remain the same, modern Windows Server versions (2019/2022/2025) have introduced the IISAdministration module as the preferred path for automation.

Modern Update (2026): Microsoft now recommends the IISAdministration module over the legacy WebAdministration module. It is built for the PowerShell object pipeline, offering better performance and more reliable transactional commits.

1. Install the Web Server Role

This command installs the base IIS role along with the required management tools for PowerShell control.

Install-WindowsFeature -Name Web-Server -IncludeManagementTools

2. Create an Application Pool

If you are using a Group Managed Service Account (gMSA) for your pool identity, ensure your username ends with a $ (e.g., svc_webapp$).

# Modern IISAdministration method (pool creation goes through the ServerManager
# object the module wraps; Get-IISAppPool is read-only)
Import-Module IISAdministration
$poolName = "MyApplicationPool"

# Create the pool via the server manager
$manager = Get-IISServerManager
$pool = $manager.ApplicationPools.Add($poolName)

# Set identity (use gMSA or custom account)
$pool.ProcessModel.IdentityType = "SpecificUser"
$pool.ProcessModel.UserName = "DOMAIN\svc_webapp$"
$pool.ProcessModel.Password = "" # Required blank for gMSA

# Persist the change
$manager.CommitChanges()

3. Create the Website & SSL Binding

Modern IIS setup utilizes SNI (Server Name Indication), allowing you to host multiple SSL sites on a single IP address.

$siteName = "MyWebSite"
$path = "C:\inetpub\wwwroot\myapp"
$thumbprint = (Get-ChildItem cert:\LocalMachine\My | Where-Object { $_.Subject -like "*CN=mysite.com*" } | Select-Object -First 1).Thumbprint

# Create Site with HTTPS and SNI enabled
New-IISSite -Name $siteName -PhysicalPath $path -BindingInformation "*:80:" 
New-IISSiteBinding -Name $siteName -BindingInformation "*:443:mysite.com" -CertificateThumbPrint $thumbprint -Protocol https -SslFlag "Sni"

Note: Windows Server 2025 has deprecated the legacy IIS 6 Management Console. Ensure your automation scripts do not rely on Web-Lgcy-Mgmt-Console.

HOW TO: Create a Certificate Bundle for an F5 BIG-IP Local Traffic Manager (LTM)

When loading certificates into a BIG-IP LTM to configure trusted chains, you often need to create a certificate bundle. This bundle is attached to an SSL profile to advertise accepted certificates during an SSL handshake or to provide the full chain of trust to a client machine.

A certificate bundle is simpler than it sounds: it is merely a series of Base64 encoded certificates listed sequentially in a single text file. Follow these steps to create and import yours; a scripted alternative is sketched after the list.

Manual Steps to Create a Bundle:

  1. Create a text file: Use a plain text editor like Notepad or TextEdit.
  2. Assemble the Chain: Copy the Base64 encoded text for each certificate in your chain (Server > Intermediate > Root) and paste them into the file one after another.
    • Ensure there are no extra spaces between the -----END CERTIFICATE----- and -----BEGIN CERTIFICATE----- tags.
  3. Navigate to Import: On your BIG-IP device, go to:
    System > Certificate Management > Traffic Certificate Management > SSL Certificate List > Import.
  4. Set Import Type: Choose Certificate.
  5. Name the Bundle: Give it a recognizable name (e.g., Corp_Chain_Bundle_2026).
  6. Upload or Paste: Either upload the text file you created or select Paste Text and copy the contents directly into the browser box.
  7. Click Import: The bundle is now ready to be selected in the "Chain" field of your SSL Profile.
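
If you would rather script the concatenation than paste by hand, a minimal sketch is below; the file names are hypothetical and the order matches the chain described in step 2.

# Minimal sketch: concatenate Base64 (PEM) certificates into a single bundle file.
# File names are hypothetical placeholders.
Get-Content .\server.crt, .\intermediate.crt, .\root.crt |
    Set-Content .\Corp_Chain_Bundle_2026.crt -Encoding ascii

The resulting file can be uploaded in step 6 exactly like a hand-assembled bundle.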

Pro Tip: If you are managing many certificates, consider exploring the F5 Bundle Manager for automating updates to CA trust anchors.

HOW TO: Setup Active Directory Domain Services (AD DS) in Windows 2012 Using PowerShell

If you're looking to create an unattended installation scenario for Active Directory, one approach would be to script your installation using PowerShell. This article describes the installation steps for Active Directory Domain Services. While originally written for Server 2012, these steps remain the standard for modern Windows Server deployments.

2026 Update: For modern deployments (Windows Server 2022/2025), set your Forest and Domain functional levels to at least Win2016; choose Win2025 only once every domain controller runs Windows Server 2025.

Preparation Steps for All Future Domain Controllers

1. Set Timezone Appropriately Using tzutil

tzutil /s "Eastern Standard Time"

2. Install AD DS Windows Role

Install-WindowsFeature -name AD-Domain-Services -IncludeManagementTools

3. Ensure AD DS Windows Service is set to Automatic

Set-Service -Name "NTDS" -StartupType "Automatic"

Configuring the Initial Domain Controller (New Forest)

The following script handles the promotion of the first DC. Note the DomainMode and ForestMode parameters—these define the minimum OS version allowed for future DCs in this forest.

$secureRestoreModePassword = ConvertTo-SecureString -string "<<Password>>" -AsPlainText -Force

Install-ADDSForest `
  -CreateDnsDelegation:$false `
  -DatabasePath "D:\Windows\NTDS" `
  -DomainMode Win2025 `
  -DomainName "corp.contoso.local" `
  -DomainNetbiosName "CORP" `
  -ForestMode Win2025 `
  -InstallDNS:$true `
  -LogPath "D:\Windows\NTDS" `
  -NoRebootOnCompletion:$false `
  -SafeModeAdministratorPassword $secureRestoreModePassword `
  -SysvolPath "D:\Windows\SYSVOL" `
  -Force:$true

Modern Note: Starting with Windows Server 2025, Active Directory supports an optional 32k database page size (up from the long-standing 8k pages), aimed at very large directories; enabling it requires that every domain controller supports the new database format.
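
Once the forest exists, additional domain controllers are promoted with Install-ADDSDomainController rather than Install-ADDSForest. A minimal sketch, assuming the AD DS role is already installed (step 2 above), the same D: drive layout, and that you supply domain admin credentials when prompted:

# Promote an additional DC into the existing corp.contoso.local domain
$secureRestoreModePassword = ConvertTo-SecureString -string "<<Password>>" -AsPlainText -Force

Install-ADDSDomainController `
  -Credential (Get-Credential) `
  -DomainName "corp.contoso.local" `
  -InstallDNS:$true `
  -DatabasePath "D:\Windows\NTDS" `
  -LogPath "D:\Windows\NTDS" `
  -SysvolPath "D:\Windows\SYSVOL" `
  -SafeModeAdministratorPassword $secureRestoreModePassword `
  -NoRebootOnCompletion:$false `
  -Force:$true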

HOW TO: Revert a Snapshot of an Instance In Amazon Web Services (AWS)

I write about this topic because it's one that may not be immediately obvious to those new to AWS and with previous virtualization experience. In AWS, there is a much looser tie between various components that make up a server.

An EC2 instance is essentially a reservation for processor power and memory. Persistent block storage (EBS) is associated with that instance through a device mapping. A snapshot is related to a volume, not the instance itself. If you want to "snapshot an instance," you are actually taking snapshots of each individual attached volume.

Snapshots are incremental; they capture only the blocks that have changed since the last snapshot. Because you cannot technically "revert" a volume in place using traditional methods, you must create a new volume from the snapshot and swap it with the existing one.

Modern Update (2026): AWS now offers a Replace Root Volume feature. You can swap the root volume with a snapshot directly from the instance's Actions > Monitor and troubleshoot menu; the instance is rebooted as part of the operation, so you no longer have to stop it and swap volumes by hand.

Manual Steps (The "Classic" Way), with a scripted sketch after the list:

  1. Open your AWS EC2 console and ensure you have the proper region selected.
  2. Identify the Volume ID of the root device you wish to revert. Note the device name (e.g., /dev/xvda).
  3. Shutdown the instance if it is still running (required for manual swaps).
  4. Go to the Snapshots pane, select your target snapshot, and choose Create Volume from Snapshot.
    • Note: Ensure the new volume is in the same Availability Zone (AZ) as your instance.
  5. Detach the old volume from the instance and attach the newly created volume using the exact same device name you noted in Step 2.
  6. Restart your instance.
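
If you find yourself doing this often, the classic swap can be scripted with the AWS Tools for PowerShell. A rough sketch follows; the instance, volume, snapshot, AZ, and device values are placeholders, and it assumes the EC2 module and your credentials are already configured.

# Rough sketch of the classic volume swap. All IDs, the AZ, and the device name are placeholders.
Import-Module AWS.Tools.EC2

$instanceId  = "i-0123456789abcdef0"
$oldVolumeId = "vol-0aaaaaaaaaaaaaaaa"
$snapshotId  = "snap-0bbbbbbbbbbbbbbbb"
$device      = "/dev/xvda"
$az          = "us-east-1a"   # must match the instance's Availability Zone

# 1. Stop the instance and wait for it to reach the stopped state
Stop-EC2Instance -InstanceId $instanceId
while ((Get-EC2Instance -InstanceId $instanceId).Instances[0].State.Name.Value -ne "stopped") { Start-Sleep -Seconds 5 }

# 2. Create a new volume from the snapshot in the same AZ and wait for it to become available
$newVolume = New-EC2Volume -SnapshotId $snapshotId -AvailabilityZone $az
while ((Get-EC2Volume -VolumeId $newVolume.VolumeId).State.Value -ne "available") { Start-Sleep -Seconds 5 }

# 3. Swap the volumes using the original device name
Dismount-EC2Volume -VolumeId $oldVolumeId
while ((Get-EC2Volume -VolumeId $oldVolumeId).State.Value -ne "available") { Start-Sleep -Seconds 5 }
Add-EC2Volume -InstanceId $instanceId -VolumeId $newVolume.VolumeId -Device $device

# 4. Start the instance again
Start-EC2Instance -InstanceId $instanceId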

Reference: For more on device mappings, see the AWS Device Naming Documentation.

Lessons Learned: .NET Framework Assembly Loading, Memory Optimization

Several of my colleagues and I recently began looking into how assemblies were loaded in the .NET CLR for a web application we're working on. The application runs several instances within each of several application pools, all of which use the same set of large assemblies. Because we had paid little attention to how assemblies were loaded up to that point, we found the same assemblies being loaded multiple times, contributing to a huge memory footprint for the application.

To fully understand the problem and the partial solution we found, I'll review how an application in IIS and the .NET CLR is logically structured.

Logical Runtime Structure of IIS Hosted Websites

Global Assembly Cache (GAC)

The GAC is a cache of Common Language Infrastructure assemblies available to be shared by multiple applications. Assemblies loaded from the GAC are loaded with Full Trust. Note that in .NET 4, Code Access Security (CAS) has changed significantly and permissions are determined largely by the permissions of the executing account.

Application Pools

Application Pools (App Pools) are grouped sets of Web Applications under IIS that share the same W3WP worker process. Each application in an app pool runs under the same service account. Often sys admins create a separate application pool for each web application to create isolation. Generally, you'll have a single worker process for an application, though you can configure multiple (referred to as a Web Garden). Note: Generally you want to avoid Web Gardens; each worker process keeps its own in-process session state and cache, which leads to inconsistent behavior unless that state is moved out of process.

Thread

Within a worker process, one or more threads are available. Threads are like workers on an assembly line, with App Domains being the different stations. A thread does all the work inside an application and can only work in one App Domain at any given time, though it can move between App Domains over the lifecycle of the application.

AppDomain

An application domain is a unit of isolation within the .NET framework. By default, every application has 1 AppDomain. Within an AppDomain, there are three contexts for assemblies:

  • default-load context: Resolved from the GAC or private application bin.
  • load-from: Loaded using the Assembly.LoadFrom method.
  • reflection-only: Loaded only for reflection purposes.

The CLR maintains a SharedDomain at the worker process level for assemblies it determines to be "domain-neutral." You can increase the chances of domain-neutral loading by placing assemblies in the GAC or using the LoaderOptimizationAttribute on the main method.

Now with all that background information out of the way, we found that placing those large assemblies in the GAC caused the CLR to load them as domain-neutral, sharing them in memory across applications in the same application pool. This significantly reduced our memory footprint and eased the resource demands on our servers.
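
If you want to experiment with the same approach, an assembly can be pushed into the GAC from PowerShell without gacutil by using the System.EnterpriseServices publishing API. A minimal sketch, assuming the assembly is strongly named, the session is elevated, and the path shown is a placeholder:

# Minimal sketch: install a strongly named assembly into the GAC from PowerShell.
# Requires an elevated session; the assembly path is a placeholder.
Add-Type -AssemblyName "System.EnterpriseServices"
$publisher = New-Object System.EnterpriseServices.Internal.Publish
$publisher.GacInstall("C:\deploy\MyLargeAssembly.dll")

# Quick check: the .NET 4 GAC lives under %windir%\Microsoft.NET\assembly
Get-ChildItem "$env:windir\Microsoft.NET\assembly" -Recurse -Filter "MyLargeAssembly.dll"

The same Publish class exposes GacRemove if you need to back the change out.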

The moral of the story is that we often don't pay enough attention to how our applications are deployed. While I'm not recommending the GAC for every solution, evaluating how your application uses resources can lead to a massive performance improvement for your end users.


Reference Links: