RAXE-2026-045 GUIDE CRITICAL S3

Hardening Your Environment Against Software Supply Chain Attacks

Supply Chain 2026-03-31 M. Hirani TLP:GREEN

Context: The TeamPCP Supply Chain Campaign

In March 2026, a threat actor known as TeamPCP executed one of the most aggressive open-source supply chain campaigns ever observed. Over the course of a single week, the group chained compromises across five ecosystems: GitHub Actions, Docker Hub, npm, OpenVSX, and PyPI. The campaign escalated from compromised credentials tied to an earlier incident affecting Aqua Security's Trivy release pipeline; Aqua later stated that the initial remediation was incomplete. Each subsequent compromise yielded credentials that unlocked the next target.

The campaign is notable for its deliberate targeting of security-adjacent software: a vulnerability scanner (Trivy), an IaC analyser (Checkmarx KICS), an LLM proxy (LiteLLM), and a telephony SDK (Telnyx). These tools run with elevated privileges by design – they need broad access to scan, analyse, or proxy API keys. Compromising them gives the attacker the trust the organisation has already granted to the tool.

Many stages of the campaign reused a similar credential-stealing and persistence pattern: harvest credentials on the host (SSH keys, cloud tokens, Kubernetes secrets, CI/CD variables, cryptocurrency wallets, database credentials, .env files), encrypt the haul with hybrid RSA+AES, exfiltrate to an attacker-controlled domain, and install persistent systemd backdoors that poll for additional binaries. However, delivery and tradecraft varied across stages – LiteLLM used a malicious .pth file for Python startup execution, Telnyx employed WAV-based payload steganography, the npm wave featured a self-propagating worm (CanisterWorm), and the Trivy/KICS stages used GitHub Action tag hijacking with Runner.Worker process memory scraping.

Why this matters to your organisation: LiteLLM alone is present in 36% of cloud environments (Wiz data). It is pulled as a transitive dependency by AI agent frameworks, MCP servers, and LLM orchestration tools. You may be affected without ever having explicitly installed the package.

This guide distils the lessons from this campaign and from broader supply chain threat research into ten actionable hardening measures grouped by priority. Five are baseline controls every organisation should implement immediately. Three are maturity controls that scale with organisational readiness. Two are conditional or campaign-specific controls that apply to specific technology stacks. Each recommendation includes a verdict on its effectiveness and practical implementation guidance.


Am I Affected Right Now?

Before reading the hardening measures, answer these questions to determine whether you need to triage immediately or can proceed to hardening.

1. Do you use any of these components?
  - LiteLLM (Python package or as a transitive dependency of AI agent frameworks, MCP servers, or LLM orchestration tools)
  - Telnyx Python SDK
  - Aqua Trivy (binary, Docker image, or GitHub Actions: trivy-action, setup-trivy)
  - Checkmarx KICS or AST (GitHub Actions: kics-github-action, ast-github-action, or OpenVSX extensions: ast-results, cx-dev-assist)

If no to all → proceed to hardening measures. You are not directly affected, but the controls in this guide will protect you against the next incident.

2. If yes – did you install, update, or pull any of these during the compromise windows?
  - LiteLLM: 24 March 2026, ~08:30–11:25 UTC
  - Telnyx: 27 March 2026, 03:51–10:13 UTC
  - Trivy binary/images/actions: 19–22 March 2026 (see Affected Packages table for exact windows)
  - Checkmarx actions/extensions: 23 March 2026, 12:58–16:50 UTC (KICS), 02:53–15:41 UTC (OpenVSX)

If yes or unsure → treat the environment as compromised. Go directly to the Detection Commands section, run them now, and follow the IR playbook (Measure 5).

3. Not sure if you have a transitive dependency?

Many teams don't install LiteLLM directly – it gets pulled in by other packages. Check:

# Python – check if litellm is installed and what pulled it in
pip show litellm 2>/dev/null && pipdeptree -r -p litellm 2>/dev/null

# Python – check for telnyx
pip show telnyx 2>/dev/null && pipdeptree -r -p telnyx 2>/dev/null

# Node.js – check for litellm in dependency tree
npm ls litellm 2>/dev/null

If any of these return results, check the version against the Affected Packages table below.

Cross-platform note: The persistence mechanisms (systemd services, .pth files) are Linux-specific. However, the credential harvester executes on any operating system where the compromised package is imported – including macOS and Windows. If you ran a compromised version on macOS or Windows, the initial data theft (credentials, SSH keys, environment variables, cloud tokens) still occurred even though no persistence artefacts were dropped. Treat credentials as exposed regardless of OS.


Start Here by Role

Not every reader needs every section. Find your role below and start with the measures that matter most to you.

Role | Start with | Then review
CISO / Head of Security | Quick-Reference Matrix, then Measures 1, 2, 3, 5 | Campaign context for board-level briefing
Incident Response / DFIR | Measure 5 (IR playbook) | IoCs, Detection Commands, Removal Commands
SOC / Detection Engineering | Measures 4, 8 (runtime + egress detection) | Infrastructure and Persistence IoCs – build alerts from these
AppSec / DevSecOps | Measures 1, 6, 7 (pin → registry → artefact provenance) | Detection Commands for CI/CD workflow auditing
CI/CD / Build Engineering | Measure 3 (CI/CD hardening) | Detection Commands – grep your workflows for affected actions
Cloud Security | Measures 2, 4, 8 (identity, runtime, egress) | IMDSv2 guidance in Measure 2; C2 IPs in IoC table
Platform / SRE | Measures 4, 5 (runtime + IR playbook) | Persistence artefacts and rebuild guidance in Removal Commands
Kubernetes Platform Owner | Measure 9 directly | IoCs – search for node-setup-* pods in kube-system now
Python / AI Engineer | Measures 1, 10 | If you use LiteLLM, MCP servers, or AI agent frameworks – check transitive exposure
Founder / Small Team | Quick-Reference Matrix | Then do the 5 baselines this week: pin deps, isolate creds, harden CI/CD, enable runtime monitoring, write a simple IR playbook

Action Timeline

First 24 hours (if you suspect exposure or are responding to the campaign now):
  - Run the Detection Commands against all environments. Check for compromised package versions, persistence artefacts, and C2 traffic.
  - If any indicator is found, execute the IR playbook (Measure 5): preserve evidence, quarantine, rotate ALL credentials, hunt for persistence.
  - Block known C2 domains and IPs at the network perimeter.

First 7 days (baseline hardening):
  - Pin all dependencies to exact versions with hash verification (Measure 1).
  - Replace long-lived secrets with workload identity federation and short-lived tokens (Measure 2).
  - Pin all GitHub Actions to full commit SHAs; audit workflow permissions and secrets (Measure 3).
  - Deploy or verify runtime monitoring coverage for base64 execution, systemd writes, and unexpected outbound connections (Measure 4).

First 30 days (maturity controls):
  - Stand up or configure an internal package registry with approval gates (Measure 6).
  - Implement egress filtering and DNS monitoring on CI/CD runners and production workloads (Measure 8).
  - Audit Kubernetes RBAC, Pod Security Standards, and service account token mounting if applicable (Measure 9).
  - Audit Python .pth files and other interpreter startup hooks across all environments (Measure 10).


Persona Quick-Start Guides

These compressed guides give specific personas their first actions without reading the full document. Each links back to the relevant measures for detail.


For CISOs and Security Leaders

You don't need to run the commands yourself. You need to confirm the right controls are in place and ask the right questions this week.

Five questions to ask your team:

  1. "Are all production dependencies pinned to exact versions with hash verification, or are we pulling latest?" (→ Measure 1)
  2. "If a PyPI package we depend on was compromised tomorrow, how would we know – and how long would it take us to rotate every credential it could touch?" (→ Measures 4, 5)
  3. "Are our GitHub Actions pinned to commit SHAs, or are we referencing mutable version tags?" (→ Measure 3)
  4. "Do our CI/CD runners use long-lived publishing tokens, or have we moved to OIDC?" (→ Measures 2, 3)
  5. "Do we have a supply chain incident playbook, and when did we last test it?" (→ Measure 5)

Board-level summary: In March 2026, a single incompletely rotated credential led to a cascading compromise across five software ecosystems, affecting security tools present in 36% of cloud environments. The attacker harvested cloud credentials, API keys, SSH keys, and database passwords from every affected host. Organisations that pinned dependencies, isolated credentials, and had tested IR playbooks limited their exposure. Those that didn't are still rotating secrets. The five baseline measures in this guide cost less than a week of engineering time and address the root causes directly.


For Incident Responders – First 60 Minutes

If you're triaging a suspected TeamPCP compromise right now:

Minutes 0–15: Run the detection commands from the Detection Commands section. Focus on: pip show litellm, pip show telnyx, persistence artefacts (~/.config/sysmon/, /tmp/pglog), and node-setup-* pods in kube-system.
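The filesystem checks in this step can be scripted. A minimal sketch covering only the two host persistence IoCs named above – the package checks and the Kubernetes pod check still need to be run separately:

```shell
#!/bin/sh
# Quick triage sketch: check for the two filesystem persistence IoCs
# named in this guide. Covers only these paths – not a full hunt.
found=0
for p in "$HOME/.config/sysmon" /tmp/pglog; do
  if [ -e "$p" ]; then
    echo "SUSPECT: $p exists – preserve before deleting"
    found=1
  fi
done
if [ "$found" -eq 0 ]; then
  echo "no persistence artefacts at the checked paths"
fi
```

On a multi-user host, repeat the $HOME check for every user profile, not just the account you are logged in as.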

Minutes 15–30: If any indicator is found, begin evidence preservation – capture copies of persistence files, systemd units, and relevant logs. Do not delete yet. Start network log review for C2 domains and IPs listed in the IoC table.

Minutes 30–45: Begin credential rotation. Start with the highest-privilege credentials: cloud IAM keys, Kubernetes service account tokens, CI/CD publishing tokens, SSH keys. Treat ALL secrets accessible on the affected host as exposed – the payload harvests everything, not just what it needs.

Minutes 45–60: Quarantine affected package versions at your internal registry (or block at the network level if no internal registry). Notify stakeholders that an IR is in progress. Begin assessing notification obligations (Measure 5, step 9).

Then follow the full IR playbook (Measure 5) for enumeration, persistence hunting, rebuild decisions, and post-incident review.


For SOC and Detection Engineers – Alerts to Build Now

Map these TeamPCP behaviours to your detection stack:

Behaviour | Log Source | Alert Logic
DNS queries to C2 domains | DNS logs, protective DNS | Query for models.litellm.cloud, checkmarx.zone, scan.aquasecurtiy.org
Connections to C2 IPs | Firewall, proxy, flow logs | Outbound to 83.142.209.203:8080, 45.148.10.212
Process decodes and executes base64 | EDR, Falco, auditd | base64 -d piped to python, or subprocess.run with base64 payload
Writes to systemd user service path | EDR, file integrity monitoring | File creation in ~/.config/systemd/user/ by non-systemd process
Creates or modifies .pth files | File integrity monitoring | Write to any site-packages/*.pth file
Privileged pod creation in kube-system | Kubernetes audit logs | Pod create in kube-system by non-system service account, especially with privileged: true
IMDS credential access | CloudTrail (AWS), VPC flow logs | Requests to 169.254.169.254 from non-infrastructure processes
GitHub repo creation named tpcp-docs | GitHub audit log | Repository create event matching tpcp-docs or docs-tpcp
Encrypted archive exfiltration | Proxy logs, NDR | HTTP POST with header X-Filename: tpcp.tar.gz

Start with the DNS and C2 IP alerts – they are the highest-signal, lowest-effort detections. Then layer in the runtime and Kubernetes alerts.
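The exfiltration-header and C2-domain detections can be prototyped as a plain grep before being ported to your SIEM. The log path and line format below are invented for illustration; only the indicators themselves come from the campaign IoCs:

```shell
# Build a two-line sample proxy log (format is illustrative, not a real proxy's)
cat > /tmp/sample_proxy.log <<'EOF'
2026-03-24T09:12:01Z POST models.litellm.cloud /upload X-Filename: tpcp.tar.gz 200
2026-03-24T09:13:44Z GET pypi.org /simple/ - 200
EOF
# Flag any request carrying the exfil archive header or a known C2 domain
grep -E 'X-Filename: tpcp\.tar\.gz|models\.litellm\.cloud|checkmarx\.zone|scan\.aquasecurtiy\.org' \
  /tmp/sample_proxy.log
```

The same pattern works against DNS resolver logs; the production version should alert rather than print.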


For Founders and Small Teams – Minimum Viable Hardening

If you're a small team without a dedicated security function, here's your week:

Monday – Pin your dependencies (2 hours)

pip install pip-tools
pip-compile --generate-hashes requirements.in -o requirements.txt
# From now on: pip install --require-hashes -r requirements.txt
# Commit requirements.txt to git

For container images, switch any :latest or :version tags to digest-pinned references (@sha256:...).

Tuesday – Isolate your secrets (2 hours)

Move API keys and database credentials out of .env files and environment variables. Use your cloud provider's secrets manager (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault). If you're on AWS, enforce IMDSv2 on all EC2 instances.

Wednesday – Harden your CI/CD (2 hours)

Open every .github/workflows/*.yml file. Replace action version tags with full commit SHAs. Audit which secrets each workflow can access and remove any it doesn't need. If you publish packages, switch to OIDC Trusted Publishers.

Thursday – Set up runtime monitoring (2 hours)

If you're on Kubernetes, install Falco and enable default rules. If not, ensure your hosts have an EDR agent and that it alerts on: base64 decode to subprocess, writes to systemd paths, and unexpected outbound HTTPS connections.

Friday – Write your IR playbook (1 hour)

It doesn't need to be long. A one-page document covering: who to call, how to preserve evidence, how to quarantine a package, how to rotate all credentials, and when to notify customers or regulators. Put it somewhere everyone can find it. Then run through it once as a team.

That's five days, roughly 9 hours of work. It addresses every root cause in the TeamPCP campaign.


For Python and AI Engineers

If you build with Python, run AI agents, use MCP servers, or depend on LLM orchestration tools – this campaign targeted your stack specifically.

Check your exposure now:

# Is litellm in your environment? (You may not have installed it directly)
pip show litellm 2>/dev/null && echo "INSTALLED – check version"
pipdeptree -r -p litellm 2>/dev/null  # what pulled it in?

# Is telnyx in your environment?
pip show telnyx 2>/dev/null && echo "INSTALLED – check version"

Three things to do this week:

  1. Pin everything. If your requirements.txt has unpinned or loosely pinned dependencies (litellm>=1.0), lock them now with pip-compile --generate-hashes. This is the single most effective control against this class of attack.

  2. Audit your .pth files. Run this in every Python environment you maintain:

python3 -c "import site; [print(p) for p in site.getsitepackages()]" | while read sp; do
  find "$sp" -name '*.pth' -exec grep -l "^import " {} \;
done

Any .pth file with an import line that you don't recognise should be investigated.

  3. Check your container images. If you pull Python base images or tool images (like Trivy) from public registries, pin them by digest, not tag. A compromised tag is silent; a mismatched digest is loud.

Top 10 Hardening Measures

The measures below are grouped into three tiers based on how broadly they apply and how quickly they should be adopted. Tier 1 controls are baseline – every organisation should implement these regardless of size or maturity. Tier 2 controls are strong but scale with organisational maturity; they deliver high value but require process investment. Tier 3 controls are conditional on your technology stack or are campaign-specific hunting techniques.

All ten are real, validated, and would have made a material difference against the TeamPCP campaign. The tier structure helps you prioritise, not skip.


Tier 1 – Baseline Controls (Implement Immediately)

These five measures are foundational. They apply to every organisation, require low-to-medium effort, and directly address the mechanics that made the TeamPCP campaign possible.


1. Pin Dependencies to Exact Versions and Verify Hashes

Owners: AppSec, DevOps, Engineering – anyone who manages dependency files or Dockerfiles.

The single most impactful defence against the TeamPCP-class attack. Both LiteLLM and Telnyx were compromised by publishing new patch versions to PyPI. Any environment running pip install litellm without a pinned version would have silently pulled the malicious release. LiteLLM's own advisory notes that its official Docker image was not impacted because that path pinned dependencies via requirements.txt rather than pulling the latest release from PyPI. Pinning to exact versions (e.g., litellm==1.82.6) with cryptographic hash verification ensures that even if a malicious version is published, your builds will not pull it.

Verdict: Essential. This is the first line of defence and the most neglected.

How:

Packages: Generate a locked dependency file with exact pins and hashes, and enforce it with pip's hash-checking mode (--require-hashes). The recommended workflow is to use pip-compile --generate-hashes (from pip-tools) to produce a requirements file with pinned versions and cryptographic hashes, then install with pip install --require-hashes -r requirements.txt. In npm, use npm ci with a locked package-lock.json. For Go, the go.sum file provides hash verification by default. Always commit lock files to version control.

Container images: The same principle applies to container registries. Reference images by digest (image@sha256:abc123...) rather than mutable tags (:latest, :v0.69.4). Trivy v0.69.4 was pushed to Docker Hub, GHCR, and ECR Public – any Dockerfile or Kubernetes manifest pulling aquasecurity/trivy:latest or a version tag during the compromise window would have silently fetched the backdoored image. Digest pinning prevents this.
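In a Dockerfile, digest pinning looks like the following. The digest shown is a zero-filled placeholder, not a real Trivy digest – resolve the actual value with your registry tooling (for example, docker buildx imagetools inspect) before committing it:

```dockerfile
# Mutable – resolves to whatever the tag points at today (avoid):
#   FROM aquasecurity/trivy:latest
# Immutable – substitute the real digest for the placeholder below:
FROM aquasecurity/trivy@sha256:0000000000000000000000000000000000000000000000000000000000000000
```

The same @sha256:... syntax works in Kubernetes manifests and docker-compose files.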

Important caveat: Hash pinning protects you from silently pulling a tampered version, but it does not protect you when you intentionally upgrade. If your team runs pip-compile --upgrade on a weekly cadence, a malicious version published during that window will pass hash checks because it is a new hash for a new version. Combine pinning with a review and approval process for dependency updates, ideally gated through an internal registry (see Measure 6).


2. Isolate Credentials from Build and Runtime Environments

Owners: Cloud Security, Platform/SRE, DevOps – anyone who provisions cloud identity, manages secrets, or configures CI/CD runners.

The TeamPCP payload harvested everything it could find: environment variables, .env files, cloud credentials, SSH keys, Kubernetes secrets, and database passwords. The reason this was so damaging is that many organisations leave credentials in places that any process on the host can read. Credential isolation – using short-lived tokens, secrets managers with least-privilege policies, and workload identity federation – dramatically reduces what an attacker can steal even if they achieve code execution. The Singapore CSA advisory explicitly states: if a compromised component was installed or ran in your environment, treat all secrets accessible to that environment as exposed and rotate them immediately.

Verdict: Critical. Credential hygiene is the difference between 'code ran' and 'full compromise'.

How: Replace long-lived API keys with workload identity federation (AWS IAM Roles for Service Accounts, GCP Workload Identity, Azure Managed Identity). Use a secrets manager (Vault, AWS Secrets Manager) with scoped, short-lived credentials instead of .env files. Minimise long-lived secrets in environment variables, prefer scoped or ephemeral identity, and avoid exposing credentials broadly across the host or runner context. For CI/CD, use OIDC tokens instead of static secrets for publishing to PyPI, npm, and Docker registries. On AWS specifically, enforce IMDSv2 (Instance Metadata Service v2) – the TeamPCP payload included a full SigV4 signing routine to steal IAM role credentials from the metadata service, and IMDSv2's session-token requirement makes this significantly harder for scripts that don't already have SDK-level IMDS access. Rotate all credentials immediately after any suspected compromise.
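On AWS, IMDSv2 can be enforced per instance from the CLI. This is a command sketch – the instance ID is a placeholder, and in practice you would apply the setting fleet-wide through launch templates or IaC rather than one instance at a time:

```shell
# Command sketch (requires AWS credentials; instance ID is a placeholder):
# aws ec2 modify-instance-metadata-options \
#     --instance-id i-0123456789abcdef0 \
#     --http-tokens required \
#     --http-endpoint enabled
# For new launches, set HttpTokens=required in the launch template's
# MetadataOptions so instances start hardened.
```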


3. Audit and Harden CI/CD Pipelines

Owners: CI/CD Engineering, DevOps, AppSec – anyone who writes or maintains GitHub Actions, GitLab CI, or Jenkins pipelines.

The TeamPCP campaign originated from a compromised CI/CD workflow in Trivy that exposed a Personal Access Token. That single token enabled the entire chain of subsequent compromises across five ecosystems. CI/CD pipelines are high-value targets because they run with publishing credentials, have network access, and execute third-party code (actions, plugins, test suites). Treating your CI/CD pipeline with the same rigour as a production server is not optional. GitHub's own secure-use guidance states that pinning to a full-length commit SHA is the only immutable way to reference an action.

Verdict: Essential. CI/CD is the most common entry point for supply chain attacks.

How: Pin all GitHub Actions to full commit SHAs. Prefer pull_request for untrusted PR testing. Only use pull_request_target when privileged context is genuinely required, and never combine it with checkout or execution of untrusted PR code. Scope CI/CD tokens to the minimum permissions required (read-only where possible). Never persist publishing tokens as long-lived secrets; use OIDC Trusted Publishers. Run builds in ephemeral, sandboxed runners that are destroyed after each job. Audit which workflows have access to which secrets. Additionally, consider hardened runner images (e.g., StepSecurity Harden-Runner) – the Trivy payload read directly from /proc/<pid>/mem of the Runner.Worker process to extract secrets, bypassing GitHub's log masking. Restricting process memory access on runners mitigates this specific technique.
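A quick way to find mutable action references is to grep workflow files for uses: lines that are not full-length commit SHAs. The workflow below is a fabricated sample (including the SHA) so the filter has something to match; run the final grep against your own .github/workflows/ directory:

```shell
# Fabricated sample workflow – one SHA-pinned step, one mutable tag
mkdir -p /tmp/wf
cat > /tmp/wf/ci.yml <<'EOF'
jobs:
  build:
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
EOF
# Keep only `uses:` references that are NOT 40-character commit SHAs
grep -nE 'uses:.*@' /tmp/wf/ci.yml | grep -vE '@[0-9a-f]{40}'
```

Every line this prints is a mutable reference an attacker could repoint via tag hijacking.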


4. Enable Runtime Monitoring and Behavioural Detection

Owners: SOC, Cloud Security, Platform/SRE – anyone who operates detection tooling, EDR, or CWPP agents.

The TeamPCP payload performed several highly anomalous actions at runtime: decoding and executing base64 blobs, writing to systemd unit files, querying the EC2 metadata service, deploying privileged pods, and making HTTPS POST requests to unknown domains. A runtime security tool monitoring for these behaviours would have flagged the compromise within seconds of execution, even if the package passed all pre-deployment checks. This is primarily a detection-and-response control – it catches compromises that prevention controls miss.

Verdict: Essential. Pre-deployment scanning alone is insufficient; you need eyes on what code actually does at runtime.

How: Deploy runtime security agents (Falco, Tetragon, commercial EDR/CWPP solutions) that monitor for suspicious process execution, unexpected file writes to systemd paths, metadata service access from non-infrastructure processes, and anomalous outbound connections. For AI systems specifically, data-plane controls can complement host and runtime telemetry, but they do not replace workload-level detection for persistence, service-account abuse, or pod creation.
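If you run auditd rather than a full EDR, the systemd-write behaviour can be watched with two watch rules. Sketch only – loading rules requires root, and the /home/ci path is a placeholder that should be extended to every user profile on shared hosts:

```shell
# auditd watch rules (run as root; paths are examples):
# auditctl -w /etc/systemd/system -p wa -k systemd_unit_write
# auditctl -w /home/ci/.config/systemd/user -p wa -k systemd_unit_write
# Review hits with:
# ausearch -k systemd_unit_write --interpret
```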


5. Establish an Incident Response Playbook for Supply Chain Compromise

Owners: Security Leadership, IR/DFIR, all team leads – this is an organisational control, not a single-team task.

The TeamPCP campaign demonstrated that incomplete remediation is worse than no remediation. Aqua's initial response to the February Trivy compromise left residual access that TeamPCP exploited three weeks later to launch the full campaign. When a supply chain compromise is detected, the response must assume the blast radius extends to every credential and system the compromised software could have touched. Half-measures create a false sense of security.

Verdict: Essential. The difference between a contained incident and a month-long campaign.

How: Your playbook should include:

  1. Preserve evidence – capture forensic evidence and relevant logs before deleting persistence artefacts, unless immediate containment risk outweighs preservation.
  2. Quarantine – immediately block affected versions at your internal registry.
  3. Enumerate exposure – identify every system that installed or ran the compromised package, including CI/CD job history, build logs, and base-image rebuild windows (not just current package presence). Transient CI jobs and historical builds matter just as much as what is currently installed.
  4. Rotate ALL credentials – rotate every credential accessible on affected hosts, not just the ones you think were targeted.
  5. Hunt for persistence – search for systemd units, cron jobs, .pth files, privileged pods, and any other persistence mechanisms documented in the campaign IoCs.
  6. Review network logs – check for connections to known C2 domains and IPs.
  7. Consider rebuilding – in many cases, rebuilding affected systems from a known-clean state is the safest course of action.
  8. Post-incident review – specifically examine whether containment was complete and whether any residual access remains.
  9. Assess notification obligations – determine whether the compromise triggers notification requirements under applicable regulations (e.g., NIS2, DORA, GDPR breach notification, sector-specific rules) or customer contracts. Engage legal counsel early – do not wait until the technical investigation is complete to begin this assessment.

Tier 2 – Maturity Controls (Scale With Organisational Readiness)

These three measures deliver strong security value but require process investment – an internal registry needs governance workflows, artefact comparison requires build infrastructure, and egress monitoring needs network visibility. They are not day-one requirements for every team, but any organisation operating AI workloads or running CI/CD at scale should be working toward them.


6. Run an Internal/Private Package Registry (PyPI, npm, etc.)

Owners: AppSec, DevOps, Platform/SRE – whoever controls the build pipeline and developer tooling.

Running an internal registry (sometimes called a private PyPI mirror or Artifactory/Nexus repository) that proxies upstream registries is a strong containment and governance layer. It gives you a single control point where you can approve packages before they enter your environment, cache known-good versions, and block packages that fail integrity checks. It also means you are not pulling directly from public PyPI at build time, which gives you time to react when a compromise like this is disclosed. Critical distinction: a caching proxy alone does not solve the problem – a simple pass-through will fetch and cache the malicious version on first request. The protection comes from combining the proxy with an approval and promotion gate that reviews new versions before making them available to internal consumers.

Verdict: Very good. High-value for any team running production AI workloads or CI/CD pipelines.

How: Use Artifactory, Nexus Repository, or AWS CodeArtifact as a proxy/cache for PyPI and npm. Configure pip to use your internal index: set PIP_INDEX_URL to your private registry URL and block direct access to pypi.org at the network level. For npm, use npm config set registry or an .npmrc file pointing to your internal registry. Combine with an approval workflow that reviews new package versions before promoting them to the internal index.
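Client-side, pointing pip at the internal index is two settings. The registry URLs below are placeholders for your own:

```shell
# Placeholder URL – substitute your internal registry
export PIP_INDEX_URL="https://pypi.internal.example.com/simple/"
# Persistent alternative (writes pip.conf):
#   pip config set global.index-url "$PIP_INDEX_URL"
# npm equivalent:
#   npm config set registry https://npm.internal.example.com/
echo "$PIP_INDEX_URL"
```

Pair the client setting with a network-level block on direct pypi.org access, or developers will bypass the registry under deadline pressure.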


7. Compare Distributed Artefacts Against Source Repositories

Owners: AppSec, Release Engineering – teams responsible for build provenance and dependency vetting.

The TeamPCP attack injected code into the PyPI wheel that was not present in the upstream GitHub repository. LiteLLM's own security update confirms that the GitHub source install path was not compromised – only the PyPI-distributed releases were. The attacker regenerated the RECORD file inside the wheel so that standard integrity checks against the wheel's own metadata passed. Source-to-artefact comparison is one of the most reliable ways to detect release-channel tampering for this class of attack.

Verdict: Highly effective for detecting release-channel tampering. Best positioned as a high-maturity control for crown-jewel dependencies, incident response forensics, and internal package publishing – not something most teams will do for every third-party dependency on every build. Other campaign stages (GitHub Action tag hijacking, OpenVSX compromise) are better addressed through immutable references, attestations, and provenance controls.

How: For Python, use tools like diffoscope to compare wheels against source builds. Adopt reproducible builds where possible. For GitHub Actions, pin actions to full commit SHAs rather than mutable tags (e.g., actions/checkout@a5ac7e51b41... not actions/checkout@v4) – full-length commit SHAs are the only immutable way to reference an action. Use Sigstore/cosign to verify artefact provenance. PyPI and GitHub both now support provenance attestations – check whether critical dependencies provide them and verify against policy. Enable PyPI Trusted Publishers (OIDC-based publishing) to eliminate long-lived API tokens from CI pipelines.
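For a single crown-jewel dependency, the comparison workflow looks roughly like this. The package, version, and repository are illustrative, and you should expect benign metadata differences in the diff unless the project's builds are reproducible:

```shell
# Command sketch (network required; package/version/repo are illustrative):
# 1. Fetch the wheel as published on PyPI:
#      pip download litellm==1.82.6 --no-deps -d dist-pypi/
# 2. Build a wheel from the matching source tag:
#      git clone --depth 1 --branch v1.82.6 https://github.com/BerriAI/litellm
#      python -m build --wheel --outdir dist-src/ litellm/
# 3. Diff the two artefacts:
#      diffoscope dist-pypi/*.whl dist-src/*.whl
```

Any injected module or modified entry point shows up in step 3 even when the wheel's own RECORD file has been regenerated to self-validate.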


8. Monitor for Anomalous Outbound Network Connections

Owners: SOC, Cloud Security, Network Engineering – whoever manages firewall rules, DNS monitoring, and proxy logs.

The TeamPCP malware exfiltrated stolen credentials to attacker-controlled domains (models.litellm.cloud, checkmarx.zone, scan.aquasecurtiy.org) and IPs (83.142.209.203:8080, 45.148.10.212). If your build or production environments had been monitored for unexpected outbound connections, the exfiltration would have been caught. Egress filtering and DNS monitoring are low-cost, high-value controls that many organisations skip.

Verdict: Very good. One of the fastest ways to detect a compromise in progress.

How: Implement egress filtering using network policies (Kubernetes NetworkPolicy, AWS Security Groups, firewall rules) to restrict outbound connections to an allowlist. Monitor DNS queries for newly registered or suspicious domains. Where operationally feasible, use a transparent proxy for HTTPS egress and log all outbound connections from CI/CD runners and production workloads. On build runners and other tightly controlled workloads, treat connections to newly registered domains as high-priority review events – this is a strong heuristic, not a universally reliable rule for all production traffic. Limitation: some TeamPCP campaign stages (notably the CanisterWorm npm worm) used an ICP blockchain canister as decentralised C2, which does not resolve to a traditional domain and evades reputation-based blocking. Behavioural detection at the host level (Measure 4) is the complementary control for those cases.
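For a standalone CI runner, a default-deny egress policy can be expressed directly in iptables. Sketch only – run as root, and replace the resolver and registry addresses (placeholders below) with your own before applying, or you will cut the runner off entirely:

```shell
# Default-deny egress sketch for a build runner (addresses are placeholders):
# iptables -P OUTPUT DROP
# iptables -A OUTPUT -o lo -j ACCEPT
# iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# iptables -A OUTPUT -p udp --dport 53 -d 10.0.0.2 -j ACCEPT    # internal resolver only
# iptables -A OUTPUT -p tcp --dport 443 -d 10.0.5.10 -j ACCEPT  # internal registry only
```

With this in place, an exfiltration attempt to an arbitrary C2 address fails at the first packet and leaves a log entry.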


Tier 3 – Conditional and Campaign-Specific Controls

These two measures are real and validated, but they apply to specific technology stacks or specific attack techniques observed in this campaign. Implement them if the condition applies to your environment.


9. Implement Kubernetes-Specific Hardening

Owners: Kubernetes Platform, SRE, Cloud Security – whoever manages cluster configuration, RBAC, and admission policies.

Applies to: organisations running Kubernetes.

The TeamPCP malware included a dedicated Kubernetes lateral movement module. If a Kubernetes service account token was present, it enumerated all nodes and deployed privileged pods to each one. The pods mounted the entire host filesystem, had full host network and PID access, and tolerated all taints – meaning they could schedule on control plane nodes. Many clusters run with default service account configurations that make this trivially easy.

Verdict: Critical for any organisation running Kubernetes.

How: For pods that do not need Kubernetes API access, disable automatic service account token mounting (automountServiceAccountToken: false). Enforce Pod Security Standards (Restricted profile) to block privileged containers, hostPID, hostNetwork, and host filesystem mounts. Use OPA Gatekeeper or Kyverno to enforce policies at admission time. Restrict RBAC permissions so workload service accounts cannot create pods, list secrets, or enumerate nodes. Use network policies to restrict workload pod access to the Kubernetes API server – the TeamPCP payload needed API access to enumerate nodes and create pods, so blocking or limiting that path stops the lateral movement even if a service account token is present. Enable Kubernetes audit logging and alert on pod creation in kube-system by non-system identities.
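The audit-log alert for privileged pod creation can be prototyped offline against kubectl get pods -A -o json output. The sample document below is fabricated so the filter logic is self-contained; against a live cluster, pipe kubectl output in instead:

```shell
# Fabricated sample standing in for `kubectl get pods -A -o json`
cat > /tmp/pods.json <<'EOF'
{"items":[
 {"metadata":{"namespace":"kube-system","name":"node-setup-abc12"},
  "spec":{"containers":[{"name":"c","securityContext":{"privileged":true}}]}},
 {"metadata":{"namespace":"default","name":"web"},
  "spec":{"containers":[{"name":"c"}]}}]}
EOF
# Flag privileged containers and the campaign's node-setup-* pod naming
python3 - <<'EOF' | tee /tmp/suspect_pods.txt
import json
for p in json.load(open("/tmp/pods.json"))["items"]:
    ns, name = p["metadata"]["namespace"], p["metadata"]["name"]
    priv = any((c.get("securityContext") or {}).get("privileged")
               for c in p["spec"]["containers"])
    if priv or name.startswith("node-setup-"):
        print(f"SUSPECT {ns}/{name} privileged={priv}")
EOF
```

The same filter, expressed as an admission policy in Gatekeeper or Kyverno, blocks the pod instead of merely flagging it.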


10. Audit Interpreter Startup Hooks and Auto-Execution Paths

Owners: Python/AI Engineering, AppSec, Platform/SRE – anyone maintaining Python environments, AI agent frameworks, or runtime configurations.

Applies to: organisations running Python workloads. The principle extends to any language runtime with automatic startup execution mechanisms.

LiteLLM 1.82.8 introduced a particularly dangerous technique: a malicious .pth file that executes arbitrary code on every Python process startup, even if litellm is never imported. This bypasses import-time hooks that many security tools monitor. Python's documentation confirms that any .pth file placed in site-packages with a line starting with import will execute at interpreter startup via site.py. The broader principle applies beyond Python: any language runtime that supports automatic startup hooks (e.g., Node.js --require, Ruby's RUBYOPT, Java's agent flags) presents a similar risk. For Python environments, .pth files are the most immediate and under-monitored vector.
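The mechanism is easy to verify safely. The demonstration below triggers the same site.py code path explicitly with site.addsitedir() rather than at interpreter startup; the file and attribute names are illustrative.

```python
# Safe, self-contained demonstration of the .pth mechanism: any line in a
# .pth file that starts with "import" is exec'd when site.py processes the
# containing directory. We invoke that processing directly via addsitedir().
import os
import site
import sys
import tempfile

pth_dir = tempfile.mkdtemp()
with open(os.path.join(pth_dir, "demo.pth"), "w") as f:
    # Benign marker; a malicious .pth runs arbitrary code at this point.
    f.write("import sys; sys._pth_demo_ran = True\n")

# Same processing the interpreter applies to site-packages at startup
site.addsitedir(pth_dir)
print(getattr(sys, "_pth_demo_ran", False))  # True: the .pth line executed
```

Note that nothing in the demonstration ever imports a package by name; the code runs purely because the file exists in a site directory, which is exactly why the LiteLLM v1.82.8 technique bypassed import-hook monitoring.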

Verdict: Good targeted control. Addresses a specific, dangerous, and under-monitored attack vector in Python environments. The broader principle – auditing interpreter auto-execution paths – is worth applying to any runtime.

How: Periodically audit all site-packages directories (including virtual environments) for unexpected .pth files:

# Check all site-packages paths (system and user), not just the primary one
python3 -c "import site; [print(p) for p in site.getsitepackages() + [site.getusersitepackages()]]" | while read sp; do
  find "$sp" -name '*.pth' 2>/dev/null
done
# Repeat inside each virtual environment; venv site-packages are not covered above

Any executable line in a .pth file should be reviewed – not just lines containing subprocess or base64, since any import statement can be abused. You can also use python -v during triage to see which .pth files are being processed. For advanced or specialised runtime environments, running Python with the -S flag disables site.py processing and prevents .pth execution, but this is a compatibility-sensitive option that should be tested before deployment and is not appropriate for all workloads. Containerised environments with immutable filesystems are another effective mitigation. For other runtimes, audit startup flags (--require, RUBYOPT, JAVA_TOOL_OPTIONS) and ensure they are not writable by application-layer dependencies.
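Before adopting the -S option anywhere, its effect is straightforward to confirm in a scratch environment. This small Python-only check (names illustrative) verifies that a child interpreter launched with -S never auto-imports the site module, and therefore processes no .pth files:

```python
# Confirm that -S skips site.py, and with it all .pth processing: a child
# interpreter started with -S should not have the site module loaded.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-S", "-c", "import sys; print('site' in sys.modules)"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # "False" when site processing was skipped
```

Remember that -S also removes site-packages from sys.path, so most applications with third-party dependencies will break under it; hence the guidance above to treat it as a specialised, compatibility-tested option rather than a default.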


Quick-Reference Matrix

Tier # Measure Effort Impact Would It Likely Have Prevented, Detected, or Limited Impact?
Baseline 1 Pin versions + hash verification Low High Prevented – blocked pull of malicious PyPI versions
Baseline 2 Credential isolation Medium Critical Limited impact – reduces what is stolen even if code executes
Baseline 3 CI/CD hardening Medium Critical Prevented (initial stage) – SHA pinning and token scoping would have blocked initial credential theft
Baseline 4 Runtime behavioural detection Medium Critical Detected – flagged base64 execution, persistence writes, and C2 callouts post-compromise
Baseline 5 IR playbook for supply chain Low Critical Limited impact – complete remediation prevents chaining to subsequent stages
Maturity 6 Internal package registry Medium High Prevented (if approval gate active) – catches bad versions before promotion
Maturity 7 Source-to-artefact comparison Medium High Detected – identified injected code in PyPI wheels; less directly applicable to tag-hijack stages
Maturity 8 Egress monitoring Low High Detected – flagged exfiltration to C2 domains and IPs
Conditional 9 Kubernetes hardening Medium Critical Prevented (lateral movement) – blocked privileged pod deployment
Conditional 10 Interpreter startup hook audit Low Medium Detected – specific to LiteLLM v1.82.8 .pth vector; principle applies to other runtimes

Key Indicators of Compromise (IoCs)

Search your environments for these indicators immediately. This section covers the components named in this guide. For the most current and comprehensive consolidated list, refer to the Singapore CSA advisory (AD-2026-001) and the vendor-specific security updates linked in the Sources section.

Affected Packages and Versions

Component Compromised Versions Known-Safe Versions Ecosystem
litellm 1.82.7, 1.82.8 1.82.6 PyPI
telnyx 4.87.1, 4.87.2 4.87.0 PyPI
Trivy binary v0.69.4 v0.69.3 (see Aqua advisory) GitHub Releases, Deb, RPM
Trivy container images v0.69.4 v0.69.3 (see Aqua advisory) Docker Hub, GHCR, ECR Public
Trivy container images (Docker Hub only) v0.69.5, v0.69.6 v0.69.3 (see Aqua advisory) Docker Hub
aquasecurity/trivy-action 75 of 76 version tags (pre-remediation) v0.35.0+ GitHub Actions
aquasecurity/setup-trivy All 7 tags (pre-remediation) v0.2.6+ GitHub Actions
checkmarx/kics-github-action All 35 tags (pre-remediation) v2.1.20 GitHub Actions
checkmarx/ast-github-action 91 tags (pre-remediation) v2.3.33 GitHub Actions
ast-results (OpenVSX) v2.53.0 See Checkmarx advisory OpenVSX
cx-dev-assist (OpenVSX) v1.7.0 See Checkmarx advisory OpenVSX

Infrastructure and Persistence IoCs

Type Indicator
C2 Domains models.litellm.cloud, checkmarx.zone, checkmarx.zone/raw, scan.aquasecurtiy.org (note: typo is the actual domain)
C2 IP:Port (Telnyx) 83.142.209.203:8080
C2 IP (Trivy) 45.148.10.212
Persistence (filesystem) ~/.config/sysmon/sysmon.py, ~/.config/systemd/user/sysmon.service
Persistence (tmp) /tmp/pglog, /tmp/.pg_state
Kubernetes Pods named node-setup-* in kube-system namespace
.pth file litellm_init.pth in site-packages directory
Exfil archive tpcp.tar.gz (HTTP header: X-Filename: tpcp.tar.gz)
Systemd service "System Telemetry Service" (sysmon.service, user scope)
GitHub fallback exfil Repositories named tpcp-docs or docs-tpcp in your GitHub organisation
CVE CVE-2026-33634 (CVSS 9.4) – added to CISA KEV catalogue

Detection Commands

Note: These commands check current package state. For CI/CD and ephemeral environments, also review CI job history, build logs, and container image rebuild timestamps during the compromise windows to identify historical exposure – credentials exposed during a transient build may still be valid.

Cross-platform reminder: The detection and removal commands below are Linux/systemd-focused. The credential harvester runs on any OS – see the cross-platform note in the "Am I Affected?" section above. If you ran a compromised version on macOS or Windows, persistence artefacts won't be present but credentials were still stolen. Rotate them.

# === STEP 1: Check if affected packages are installed ===

# Check for compromised litellm versions
pip show litellm 2>/dev/null | grep -i version
pip freeze 2>/dev/null | grep litellm

# Check for compromised telnyx versions
pip show telnyx 2>/dev/null | grep -i version
pip freeze 2>/dev/null | grep telnyx

# === STEP 2: Check for TRANSITIVE dependencies ===
# You may have litellm or telnyx without having installed them directly.
# pipdeptree shows what pulled them in.
pip install pipdeptree --break-system-packages -q 2>/dev/null
pipdeptree -r -p litellm 2>/dev/null   # shows what depends ON litellm
pipdeptree -r -p telnyx 2>/dev/null    # shows what depends ON telnyx

# For Node.js environments:
# npm ls litellm 2>/dev/null

# === STEP 3: Check for container image exposure ===
# If you pull Trivy images, check which digest you have:
docker images --digests 2>/dev/null | grep -i trivy
# Compare against known-good digests from Aqua's advisory.

# === STEP 4: Check for persistence artefacts (Linux) ===

# Check for .pth file (LiteLLM v1.82.8 specific) – scan ALL site-packages paths
python3 -c "import site; [print(p) for p in site.getsitepackages()]" 2>/dev/null | \
  while read sp; do find "$sp" -name "litellm_init.pth" 2>/dev/null; done

# Broader .pth audit – flag any .pth file containing executable import lines
python3 -c "import site; [print(p) for p in site.getsitepackages()]" 2>/dev/null | \
  while read sp; do
    find "$sp" -name "*.pth" -exec sh -c '
      for f; do
        if grep -qE "^import " "$f" 2>/dev/null; then
          echo "REVIEW: $f contains executable import line(s):"
          grep -n "^import " "$f"
        fi
      done
    ' _ {} +
  done

# Check for persistence artefacts
ls -la ~/.config/sysmon/sysmon.py 2>/dev/null
ls -la ~/.config/systemd/user/sysmon.service 2>/dev/null
ls -la /tmp/pglog /tmp/.pg_state 2>/dev/null
systemctl --user status sysmon.service 2>/dev/null

# === STEP 5: Check for Kubernetes lateral movement ===
# Check for attacker pods in Kubernetes (also review audit logs for short-lived pods)
kubectl get pods -n kube-system | grep node-setup

# === STEP 6: Check for CI/CD and network exposure ===
# Check for GitHub Actions exposure (search workflow files for affected actions)
grep -ri "aquasecurity/trivy-action\|aquasecurity/setup-trivy\|checkmarx/kics-github-action\|checkmarx/ast-github-action" .github/workflows/

# Check network logs for C2 traffic
# Search DNS/proxy/firewall logs for:
#   models.litellm.cloud, checkmarx.zone, scan.aquasecurtiy.org (domains)
#   83.142.209.203:8080 (Telnyx C2), 45.148.10.212 (Trivy C2)

# Check for fallback exfiltration repositories
# Search your GitHub organisation for repos named tpcp-docs or docs-tpcp

Removal Commands

Important caveats before running these commands:

  • These commands are Linux/systemd-specific.
  • Preserve evidence before deletion where possible – capture copies of persistence artefacts and relevant logs for forensic analysis.
  • Removal alone is not sufficient. You must also rotate all secrets accessible on affected hosts, reinstall known-good package versions, and verify that no other persistence mechanisms remain.

# Stop and remove the systemd persistence (user scope)
systemctl --user stop sysmon.service
systemctl --user disable sysmon.service
rm -f ~/.config/sysmon/sysmon.py
rm -f ~/.config/systemd/user/sysmon.service
rm -f /tmp/pglog /tmp/.pg_state
systemctl --user daemon-reload

# Remove malicious .pth file if present
python3 -c "import site; [print(p) for p in site.getsitepackages()]" 2>/dev/null | \
  while read sp; do rm -f "$sp/litellm_init.pth" 2>/dev/null; done

# Reinstall known-good package versions
pip install litellm==1.82.6 --force-reinstall
pip install telnyx==4.87.0 --force-reinstall

Campaign status as of 31 March 2026: Public advisories (including the Singapore CSA) still treat the TeamPCP campaign as ongoing. No additional ecosystem compromises have been publicly confirmed since the Telnyx incident on 27 March, but further pivots remain plausible given the volume of credentials already harvested. Any CI/CD pipeline that ran a compromised Trivy or KICS action during the March 19–23 window may have exposed publishing tokens for other registries. Assume the blast radius is larger than the known compromises.


Sustaining These Controls

The measures in this guide address the immediate threat. Supply chain security is ongoing. After you have implemented the baseline controls and responded to any direct exposure, sustain your posture with these practices:

  • Monitor dependency changes continuously. Use tools like Dependabot, Renovate, Socket, or Endor Labs to get automated alerts when dependencies update, are flagged as malicious, or exhibit suspicious behaviour. Don't rely on manual review alone.
  • Subscribe to security advisories for critical dependencies. Follow the GitHub Security Advisory feed, PyPI advisory database, and npm audit for the packages in your dependency tree. For Trivy, LiteLLM, and Checkmarx specifically, monitor the vendor advisory pages linked in the Sources section.
  • Review controls periodically. Dependency pins drift, CI/CD permissions creep, and egress rules accumulate exceptions. Schedule a quarterly review of the controls you have implemented to ensure they are still enforced.
  • Run tabletop exercises for supply chain scenarios. Your IR playbook (Measure 5) is only useful if the team has practised it. Run a tabletop at least once using a scenario similar to TeamPCP: "A critical dependency published a malicious patch version – what do we do in the first 60 minutes?"
  • Track supply chain threat intelligence. Follow feeds from Wiz, Endor Labs, Socket, Datadog Security Labs, and Sonatype for emerging supply chain campaigns. The TeamPCP playbook will be reused by other actors – early awareness of new compromises gives you hours of lead time.

Sources and Further Reading

Vendor Advisories:

  • Aqua Security – Trivy supply chain incident discussion (github.com/aquasecurity/trivy/discussions/10425) and GitHub Security Advisory GHSA-69fq-xp46-6x23
  • Checkmarx – Security Update: KICS, ast-github-action, OpenVSX extensions (checkmarx.com/blog/checkmarx-security-update, 24 Mar 2026)
  • LiteLLM – Official Security Update (docs.litellm.ai/blog/security-update-march-2026, 25 Mar 2026)
  • Telnyx – Python SDK Security Notice (telnyx.com/resources, 27 Mar 2026)

Government Advisories:

  • Singapore CSA – Ongoing 'TeamPCP' Supply-Chain Campaign, AD-2026-001 (csa.gov.sg, 27 Mar 2026)
  • CISA – CVE-2026-33634 added to Known Exploited Vulnerabilities catalogue

Research and Analysis:

  • Wiz Research – Trivy Compromised: TeamPCP Supply Chain Attack (wiz.io/blog, 19 Mar 2026)
  • Wiz Research – KICS GitHub Action Compromised (wiz.io/blog, 23 Mar 2026)
  • Wiz Research – TeamPCP trojanizes LiteLLM (wiz.io/blog, 24 Mar 2026)
  • Endor Labs – TeamPCP Isn't Done: Full technical analysis (endorlabs.com/learn, 24 Mar 2026)
  • Datadog Security Labs – LiteLLM and Telnyx compromised on PyPI (securitylabs.datadoghq.com, 24 Mar 2026; updated 27 Mar 2026)
  • Microsoft Security Blog – Detecting, investigating, and defending against the Trivy supply chain compromise (microsoft.com, 24 Mar 2026)


This guide was produced by RAXE AI (raxe.ai) as a resource for security teams and customers. RAXE AI builds on-device, privacy-first AI runtime security – bringing enforcement to the data plane. For questions or to discuss your environment's exposure, contact us at raxe.ai.


Changelog

Version Date Changes
1.0 31 Mar 2026 Initial publication: 10 hardening measures, IoCs, detection/removal commands
1.1 31 Mar 2026 Fixed Trivy root-cause phrasing; corrected IoC table (Trivy v0.69.4 images, Telnyx C2 IP, Trivy C2 IP); fixed .pth detection command; corrected source dates and references (Aqua #10425, GHSA, Checkmarx, CSA)
1.2 31 Mar 2026 Restructured measures into three tiers (Baseline / Maturity / Conditional); added IMDSv2, runner memory protection, API server network restriction, ICP C2 limitation; widened Measure 10 to interpreter startup hooks; added automountServiceAccountToken scoping
1.3 31 Mar 2026 Added role-based routing table and 24h/7d/30d action timeline
2.0 31 Mar 2026 Added "Am I Affected?" triage section; container image digest pinning; transitive dependency discovery commands; cross-platform exposure notes; notification obligations in IR playbook; "Sustaining These Controls" section; changelog
2.1 31 Mar 2026 Added persona quick-start guides (CISO leadership questions, IR first 60 minutes, SOC alert-building table, founder minimum-viable checklist, Python/AI engineer guide); added owner tags to all 10 measures; web-publication formatting for raxe.ai/labs