Moving from ChatGPT to Claude

I’ve been using ChatGPT and Codex for a while and I’m starting to evaluate Claude. No conclusions yet. Getting started was rough. The desktop app would look like it was working and then throw “unable to connect,” and when I tried to subscribe the Stripe payment page wouldn’t load at all. Claude also had an outage around the same time, which didn’t help, but even after recovery things were still inconsistent. The real issue turned out to be my Pi-hole.

For anyone not familiar, Pi-hole is a DNS-level blocker. It sits in front of your devices and blocks ads, trackers, and a lot of third-party domains by default. That’s the point, but it also means anything that depends on those domains can partially or completely break.

Claude depends on more than just claude.ai. It pulls in additional domains for payments, telemetry, and chat, and Pi-hole blocks a lot of that in ways that fail silently. Quick way to confirm: disable Pi-hole for 30 seconds and reload. If everything suddenly works, that’s your cause.
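Another way to check, without toggling Pi-hole off, is to resolve a few of the domains directly and look for Pi-hole's usual sinkhole answers. A minimal sketch; the domain list here is an illustrative subset, and the full set is in the allow list linked later in this post:

```python
import socket

# Illustrative subset; see the full allow list linked below.
DOMAINS = ["claude.ai", "anthropic.com"]

# Addresses Pi-hole commonly returns for blocked names.
SINKHOLES = {"0.0.0.0", "127.0.0.1", "::", "::1"}

def is_sinkholed(ip: str) -> bool:
    """A resolved address in this set usually means Pi-hole answered, not the real DNS."""
    return ip in SINKHOLES

def check(domain: str) -> str:
    try:
        ip = socket.gethostbyname(domain)
    except socket.gaierror:
        # Pi-hole in NXDOMAIN mode returns no answer at all.
        return f"{domain}: no answer (likely blocked)"
    status = "BLOCKED" if is_sinkholed(ip) else "ok"
    return f"{domain}: {status} ({ip})"

if __name__ == "__main__":
    for d in DOMAINS:
        print(check(d))
```

If anything prints BLOCKED (or "no answer") while the Pi-hole admin page shows the query, you've found your culprit without touching the filter.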

Instead of chasing domains one at a time, I built a working allow list and published it here: https://github.com/J8k3/blog-code/blob/main/pi-hole/claude.txt. Add it and move on.

The outage and the Pi-hole blocking overlapped just enough to make debugging harder than it should have been. Separately, the status dashboard shows a steady stream of incidents. For something positioning itself as a serious productivity tool, that reliability signal matters, and right now it is not great. By comparison, ChatGPT comes across as more polished in this setup. In practice it appears to rely less on scattered third-party dependencies, so it behaves more predictably in a filtered network.

More once I’ve actually used it.

Cartes Bancaires Support in AWS Payment Cryptography

AWS announced support for Cartes Bancaires in AWS Payment Cryptography. You can read the official announcement here: https://aws.amazon.com/about-aws/whats-new/2026/02/payment-cryptography-cartes-bancaires/

I was not there for the final announcement, but I was heavily involved in the work that made it possible.

A large part of that work was audit readiness and making sure the implementation would satisfy the requirements for approval. Cartes Bancaires was still developing their audit program while we were going through the process, which added complexity.

I worked directly with Deloitte through the review and was on every vendor call covering progress, evidence, gaps, and what was needed to get across the finish line. Most of that work is not visible from the outside: documentation, evidence, validation, follow-up, and making sure assertions matched what could actually be defended during audit. It rarely shows up in an announcement, but it is what makes a launch possible.

Good Engineering Managers Don’t Leave the Technology Behind

As a manager of software teams, I’ve been asked many times about the transition from building software to managing the people who build it. It’s an existential question for a lot of engineers as they think about career growth: Do I become a manager, or do I stay technical?

That framing has always bothered me, because it assumes something I don’t believe to be true: that managing and continuing to develop technical depth are mutually exclusive.

It’s certainly possible to manage a software project with only surface-level technical knowledge. Plenty of organizations do exactly that. But in my experience, truly understanding the systems your engineers are working on is invaluable, both to the team and to the outcomes they deliver.

What Changes When You Become a Manager

The real shift isn’t from technical to non-technical. It’s a shift in how and where your technical skills are applied.

As an engineer, your leverage comes from the code you write. As a manager, your leverage comes from the decisions you enable, the risks you surface early, and the technical tradeoffs you help your team navigate.

That requires more than vocabulary-level familiarity. It requires enough depth to ask the right questions, recognize when complexity is creeping in, and understand when a problem is architectural versus organizational.

Why Technical Depth Still Matters

Teams know very quickly whether their manager actually understands the work. Not at the level of “could I implement this myself,” but at the level of “do you understand why this is hard, risky, or worth doing.”

Technical depth enables better judgment calls:

  • When a deadline is unrealistic versus merely uncomfortable.
  • When an incident is a one-off versus a systemic design issue.
  • When adding people will help, and when it will slow everything down.

It also builds trust. Engineers are far more willing to accept hard tradeoffs when they believe those decisions are informed by real understanding rather than abstraction.

The False Choice

The idea that you must choose between management and technical growth is mostly a product of how organizations structure careers, not a law of nature.

Some managers stop developing technically because their role no longer demands it. Others continue learning because it makes them better at prioritization, architecture discussions, and long-term planning.

I’ve always believed that the best engineering leaders stay close enough to the technology to understand its constraints, even if they are no longer the primary authors of the code.

Where I Landed

For me, management was not a departure from engineering. It was an expansion of scope.

The tools changed. The feedback loops got longer. But the core skill, understanding complex systems and helping them work better, remained the same.

If you’re an engineer facing this decision, my advice is simple: don’t assume you’re choosing between people and technology. The best managers I’ve worked with never did.

VPC Flow Logs: Use Them Intentionally

Note: I originally sketched this post years ago and never finished it. I’m publishing it now as a retrospective on how I think about VPC Flow Logs at scale.

For a long time, the default guidance in AWS environments was simple: enable VPC Flow Logs everywhere. At small to moderate scale, that advice is usually fine. At large scale, it becomes expensive, noisy, and often redundant.

There’s an inherent catch-22 with Flow Logs. If you don’t have them enabled, you miss historical data when you need it. If you enable them universally, you can generate massive volumes of duplicated traffic data that few teams ever analyze in a meaningful way.

At sufficient scale, AWS can perform network-level analysis across its infrastructure independent of whether an individual account is exporting Flow Logs. Because of that, Flow Logs are not always treated internally as a hard security requirement for every workload. I argued for that shift myself, mainly because the cost and operational overhead often outweighed the marginal benefit.

None of that makes Flow Logs pointless. It means they should be used deliberately.

Analysis vs. Compliance

I think of Flow Logs primarily as an analysis tool. If you have a compliance obligation that requires network traffic retention, or you have automated tooling that continuously analyzes Flow Logs for anomalies, they’re absolutely worth enabling.

If you don’t, you’re often collecting data “just in case,” with no realistic plan to review it. In that scenario, Flow Logs tend to become expensive cold storage rather than a security control.

Practical Guidance

  • If you’re unsure, enable them. When you’re actively troubleshooting or operating a high-risk environment, having the data is better than wishing you did.
  • Always manage retention. Don’t enable Flow Logs without S3 lifecycle policies. Expire or transition data aggressively. Thirty to ninety days is enough for most investigations.
  • Be honest about usage. If no one is looking at the data and no system is analyzing it, you’re paying for peace of mind, not security.
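The retention point above is the one most often skipped. A minimal sketch of the kind of lifecycle policy I mean; the bucket name and key prefix here are hypothetical, so adjust both to match your flow-log delivery setup:

```python
# Lifecycle policy for a flow-logs bucket: transition to infrequent access
# after 30 days, expire entirely after 90.
LIFECYCLE = {
    "Rules": [
        {
            "ID": "expire-flow-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "AWSLogs/"},  # default flow-log key prefix
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"}
            ],
            "Expiration": {"Days": 90},
        }
    ]
}

# Applying it requires boto3 and credentials, roughly:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-flow-logs-bucket",  # hypothetical name
#     LifecycleConfiguration=LIFECYCLE,
# )
```

Set the expiration to whatever your investigations actually need; the point is that the number is chosen, not unbounded.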

Used intentionally, VPC Flow Logs are valuable. Enabled blindly, they’re just another growing bucket of logs no one reads.

Amazon Web Services (AWS) Solutions Architect Professional Exam (SAP-C00)

Note: I originally wrote this post in 2017 but never published it at the time. Please note that the AWS certification landscape has changed significantly since then; this is provided strictly as a historical reference.

Another year, another re:Invent. This year at re:Invent 2017, because my associate certification was up for renewal again, I decided to sit for the AWS Solutions Architect Professional exam. The exam is in a beta phase, where questions are being tested and refined and the pass line is being set. I won't find out whether I passed until March 2018, and I can't share actual exam questions, but I can share advice for others interested in the exam in the future. Note that as of January 2018 the beta is closed, as it has proved very popular.

Preparation:

I entered the exam cold, drawing only on my working knowledge of AWS and its services, so my perspective should be an unbiased view of the exam. There is an exam blueprint, but it has since been pulled from the AWS website.

Format:

  • ~3hr Exam Time
  • > 100 Questions
  • Reading Comprehension Questions
  • Question Nuances Were Important
  • Heavy Focus on Services and Service Components with Security Relationship:
    • IAM
    • WAF
    • CloudFront
    • ACM
    • Security Groups
    • NACLs
    • VPC
    • etc.

My Exam Perspective:

I found the questions to be very long, requiring significant reading comprehension to answer, and the answer choices were similarly lengthy. I had to read a number of questions at least twice to pick up on all of their nuances and differentiate answer validity. The exam had substantial parallels to security-related questions on other exams.

AWS Payment Cryptography in Sydney and AS2805 Support

AWS announced AWS Payment Cryptography is now available in the Asia Pacific (Sydney) Region. You can read the official announcement here: https://aws.amazon.com/about-aws/whats-new/2025/12/aws-payment-cryptography-in-sydney/

This was a particularly difficult launch.

Not because of one major issue, but because there were a lot of moving pieces all happening at once. Hardware deployment issues, firmware rollout problems, feature dependencies colliding near the finish line, compliance requirements, launch timing, and the normal reality that things rarely line up as cleanly in practice as they do on a plan.

A lot of the work near launch was simply making sure everything that needed to happen actually happened, in the right order, without creating new problems somewhere else.

Regional expansion for a service like this is never just turning something on in another place. Every assumption around hardware, operations, and readiness gets tested again.

Those are usually the hardest launches. Not one big failure, just constant pressure across a dozen important things at once.

Launching AWS Payment Cryptography

AWS announced the launch of AWS Payment Cryptography this week, and I’ve had the opportunity to lead the service from its earliest definition through production launch. The official AWS announcement is here: https://aws.amazon.com/about-aws/whats-new/2023/06/aws-payment-cryptography/.

This was one of those projects where the hard part was never just building software. The challenge was defining a service that could meet the expectations of payment processors, issuers, and financial institutions who were used to a vastly different interaction model while operating inside the security, compliance, and operational standards required for payment cryptography.

My role started at the beginning: taking early customer input, writing the initial business requirements, and helping shape the architecture that would eventually become the service. That meant defining the threat model, establishing the security posture, and making early decisions around control-plane boundaries, data-plane design, hardware integration, and how HSM-backed infrastructure would operate inside AWS.

I also led the evaluation and selection of the HSM platform itself. That work involved deep vendor evaluation of the big three payment HSM vendors, prototype testing, operational modeling, and understanding what would actually work for a managed cloud service rather than simply replicating traditional on-premises approaches. 

As the service moved toward launch, a major focus became operational discipline. Observability, operational reviews, and HSM fleet health management were critical to making sure the system would hold up under real customer use, not just pass a design review. Several of the hardware-backed design patterns established during this work are already proving useful beyond this single service.

Launching AWS Payment Cryptography has been one of the most meaningful things I’ve worked on. It was a rare opportunity to help build something from ambiguity to durable production, where architecture, security, and execution all had to hold together at the same time.

EBS Key Rotation Strategies for KMS Master Keys

The AWS Key Management Service (KMS) provides a capability to manage encryption keys with transparent integration into many other AWS services. Of particular significance is transparent data-at-rest encryption for the AWS Elastic Block Store (EBS) service. When using KMS encryption, the data stream to and from the underlying storage medium is encrypted and decrypted at the hypervisor level. KMS is a regional, account-bound service that uses software key generation and underlying Hardware Security Module (HSM) appliances for encrypted storage of key material. Keys are decrypted as needed, held in memory for the duration of the operation, and then immediately erased.

The end user has the ability to effect controls over access to the keys used for customer data encryption. As part of a security management plan, the customer may desire, or be required, to effect a key rotation strategy: keys may require rotation by policy with a defined validity period, or due to compromise. AWS encourages customers to encrypt all data both in motion and at rest. This protects customer privacy while providing the ability to crypto-erase data by deleting its encryption key, a useful mitigation for data spills or for enforcing deleted-data protection.

Envelope Encryption

KMS uses an envelope encryption scheme, providing two layers of protection for customer data and enabling access controls over the encrypted data itself. A data key is stored alongside the customer data it encrypts, forming an envelope containing the encrypted key and the ciphertext. The data key is encrypted with a Customer Master Key (CMK), which has management facilities in the AWS Console, CLI, and SDKs. To decrypt a dataset, an end user must have access to the CMK used to encrypt the data key, and every use leaves a forensic audit trail. Without the CMK, a data key cannot be decrypted, and hence the underlying customer data cannot be decrypted either. The CMK is itself encrypted by an AWS master key stored under physical security controls audited by a third party, allowing recovery of CMKs from the underlying HSMs if a full region outage were ever to occur and the in-memory copies were lost. KMS supports two types of CMKs, differing in key material origin: the AWS-generated "AWS managed CMK" and the customer-imported external "customer managed CMK".
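The layering can be sketched as a toy model. XOR stands in for AES here purely to show the key relationships; none of this is real cryptography:

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Stand-in for a real cipher, purely to illustrate the layering.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The CMK never leaves KMS; only the *encrypted* data key travels with
# the customer data.
cmk = os.urandom(32)
data_key = os.urandom(32)

envelope = {
    "ciphertext": xor(b"customer data", data_key),
    "wrapped_key": xor(data_key, cmk),  # data key encrypted under the CMK
}

# Decryption requires KMS (the holder of the CMK) to unwrap the data key
# before the customer data can be recovered.
unwrapped = xor(envelope["wrapped_key"], cmk)
recovered = xor(envelope["ciphertext"], unwrapped)
assert recovered == b"customer data"
```

Losing access to the CMK leaves both layers of the envelope unreadable, which is exactly what makes crypto-erase by key deletion work.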

EBS Service, Snapshots and AMIs

The EBS service provides block storage in the form of mountable volumes for Elastic Compute Cloud (EC2) instances. These volumes are similar to logical volumes presented to a virtual machine from underlying SAN storage. An EBS volume backup can be taken using the Snapshot feature, which copies the blocks that have changed since the prior snapshot point, capturing the point-in-time contents of a volume. Snapshots are taken from an EBS volume and can be used to create new copies of that volume with identical data as of a given point in time. As such, they serve as the primary backup mechanism for EC2 instances as part of a disaster recovery plan. Snapshots, however, carry no instance configuration, as they represent only a single volume and not the machine to which the source volume was attached. For this purpose, AWS offers an instance imaging feature that captures an instance's current configuration and creates snapshots of all attached volumes, producing what is called an Amazon Machine Image (AMI). AMIs and snapshots are stored for you in an AWS-managed S3 construct, making both regional service components. A copy feature is provided for both an AMI (really just an abstraction over the snapshots tied to it) and an individual snapshot, allowing the enablement of encryption and selection of a master key. This selection causes KMS to generate a data key for each of the snapshots in question, which is then used to encrypt the customer data.

KMS AWS Managed CMK Behaviors

An AWS managed CMK is generated by, and populated with key material through, the software constructs of the KMS service. A default AWS managed key is automatically created at account creation for each service offering native KMS integration, in each AWS region. AWS managed CMKs offer an automated rotation feature, when enabled, on a one-year lifecycle. Rotation generates new key material and archives the old key material within the CMK. The old, deprecated key material is retained to decrypt data keys previously encrypted with it, while the new key material is used for all encryption operations on new data sets going forward. In this process, the data keys for customer data are not rotated: existing data keys are not decrypted and re-encrypted with the new key material.

KMS Customer Managed CMK Behaviors

A customer managed CMK is generated as an empty container into which the customer can import a 256-bit symmetric encryption key. The advantage is that the customer can generate the key either in software or through their own hardware key management tools. The customer must maintain a copy of that key within their own IT infrastructure and is solely responsible for recovering it, by re-importing the key material into the CMK, in the event of a full AWS region outage. A customer managed CMK can hold only a single set of key material and cannot maintain a history of key material as AWS managed CMKs do. The ability to re-import key material is provided purely for recovery; if different key material were ever uploaded, all customer data previously encrypted with that CMK in its envelope chain would no longer be decryptable.

Key Rotation Strategies

When using AWS managed CMKs, the customer has minimal control over the deprecation of legacy key material. Although automated rotation phases out future use of legacy key material, it does not re-encrypt data keys, and customer data remains encrypted with the same data key. Using customer managed CMKs gives greater control over the source of key material but places responsibility on the customer to implement a key rotation strategy. In each scenario, there are strategies that leverage KMS service behaviors to force the desired rotation outcomes.

Rotating Only A CMK

For either an AWS managed or customer managed CMK, a customer can effect complete master key rotation; in the case of AWS managed CMKs, this includes phasing out the use of legacy key material. A strategy to achieve this outcome is to initiate a copy of the source customer data within the same region and account. This does not change the data keys or re-encrypt the customer data; it does, however, cause the data keys to be re-encrypted with the new CMK.

Rotating Both A CMK and Data Key

For either an AWS managed or customer managed CMK, a customer can effect complete master and data key rotation; in the case of AWS managed CMKs, this completely phases out the use of legacy key material. The strategy is to initiate a copy of the source customer data that crosses an AWS account and/or AWS Region boundary, and then an additional copy back to the source. The first copy causes a new data key to be generated at the destination and enables the selection of an arbitrary CMK there. Note that this requires configuring KMS CMK access permissions; see https://aws.amazon.com/blogs/aws/new-cross-account-copying-of-encrypted-ebs-snapshots/. Following the reverse process to copy the data back causes a new data key to be generated in the source account/region and enables the selection of a new CMK there. Cross-region copies incur cross-region data transfer charges.
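The two strategies can be illustrated with a toy envelope model, XOR standing in for real encryption purely to show which keys change on each copy path:

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Stand-in for a real cipher, purely to illustrate key relationships.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def same_region_copy(envelope, old_cmk, new_cmk):
    """Rotating only the CMK: the data key is unwrapped and re-wrapped
    under the new CMK; the customer data itself is untouched."""
    data_key = xor(envelope["wrapped_key"], old_cmk)
    return {"ciphertext": envelope["ciphertext"],
            "wrapped_key": xor(data_key, new_cmk)}

def cross_boundary_copy(envelope, old_cmk, new_cmk):
    """Rotating CMK and data key: crossing an account/region boundary
    generates a fresh data key and re-encrypts the customer data."""
    old_key = xor(envelope["wrapped_key"], old_cmk)
    plaintext = xor(envelope["ciphertext"], old_key)
    new_key = os.urandom(32)
    return {"ciphertext": xor(plaintext, new_key),
            "wrapped_key": xor(new_key, new_cmk)}
```

The first path leaves the ciphertext byte-for-byte identical and only re-wraps the data key; the second produces fresh ciphertext under a fresh data key, which is what the copy-out-and-back achieves.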

Conclusion

It is possible to implement a key rotation strategy that meets security and/or compliance requirements by manipulating AWS service behaviors. The rule of thumb: data copy actions enable new key selection, and crossing an AWS account or AWS Region boundary causes customer data to be re-encrypted with new keys.

About the Author: Jacob Marks is an engineering leader with over 20 years of experience, including a decade at Amazon Web Services (AWS) where he led teams in EC2 Core Platform and the development of the AWS Payment Cryptography service.