Executive Summary
What: A high-severity vulnerability in PyTorch's weights_only unpickler (CVE-2026-24747, CVSS 8.8 HIGH) allows an attacker to craft a malicious checkpoint file that, when loaded with torch.load(..., weights_only=True), corrupts heap memory and can lead to arbitrary code execution (NVD). The weights_only=True parameter was specifically introduced as a safety mechanism to prevent code execution during model loading — this vulnerability defeats that protection entirely (GHSA-63cw-57p8-fm3p).
So What: PyTorch is a major deep learning framework (RAXE assessment). Organisations that adopted weights_only=True as a security control when loading third-party model checkpoints now have a false sense of security. Two independent security researchers have publicly posted about weaponised exploit development targeting this vulnerability [1][5]. Combined with RAXE-2026-015 (PickleScan Bypass), both the framework's built-in safety mechanism and the principal third-party scanning tool for serialised ML files are assessed as broken (RAXE assessment).
Now What: Upgrade to PyTorch 2.10.0 or later immediately (GHSA-63cw-57p8-fm3p). Audit all workflows that load model checkpoints from untrusted sources. Deploy the detection rules provided in this report to monitor for exploitation attempts. Review RAXE-2026-015 (PickleScan Bypass) for compounding supply chain risk.
Risk Rating
| Dimension | Rating | Basis |
|---|---|---|
| Severity | HIGH (CVSS 8.8) | AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H (NVD) |
| Urgency | HIGH | Unverified public claims of weaponised exploit development by two independent researchers [1][5] |
| Scope | BROAD | All PyTorch versions prior to 2.10.0 on all platforms (NVD) |
| Confidence | HIGH | CVE confirmed, GHSA published by PyTorch maintainer, fix released (GHSA-63cw-57p8-fm3p) |
| Business Impact | HIGH | Remote code execution in ML inference and training pipelines; supply chain compromise vector (NVD) |
CVSS v3.1 Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H (NVD)
| Metric | Value |
|---|---|
| Attack Vector | Network (NVD) |
| Attack Complexity | Low (NVD) |
| Privileges Required | None (NVD) |
| User Interaction | Required (NVD) |
| Scope | Unchanged (NVD) |
| Confidentiality | High (NVD) |
| Integrity | High (NVD) |
| Availability | High (NVD) |
Affected Products
| Package | Registry | Vulnerable Range | Fixed Version | Source |
|---|---|---|---|---|
| `torch` | PyPI | All versions < 2.10.0 | 2.10.0 | GHSA-63cw-57p8-fm3p |
Weaknesses:
- CWE-94: Improper Control of Generation of Code (NVD)
- CWE-502: Deserialisation of Untrusted Data (NVD)
Am I Affected?
- [ ] Do you use PyTorch (`torch`) in any environment?
- [ ] Is your installed version earlier than 2.10.0?
- [ ] Do you load model checkpoint files (`.pth`, `.pt`, `.bin`, `.ckpt`) from external sources?
- [ ] Do you rely on `weights_only=True` as a security control?
- [ ] Do you download pre-trained models from public repositories (e.g., Hugging Face Hub)?
If you answered yes to both of the first two questions and to any of the remaining three, you are affected.
Check your version:
python3 -c "import torch; print(torch.__version__)"
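The same check can be automated in CI or fleet inventory tooling. A minimal sketch follows; the function name and the version-parsing heuristic are illustrative, not an official PyTorch API:

```python
# Illustrative guard: flag torch versions older than the 2.10.0 fix.
# The parsing is deliberately simple; it strips local build tags such
# as "+cu121" and ignores pre-release suffixes such as "rc1".
import re

def is_vulnerable(version_string, fixed=(2, 10, 0)):
    """Return True if the reported torch version predates 2.10.0."""
    parts = []
    for piece in version_string.split("+")[0].split(".")[:3]:
        match = re.match(r"\d+", piece)
        parts.append(int(match.group()) if match else 0)
    while len(parts) < 3:
        parts.append(0)
    return tuple(parts) < fixed

print(is_vulnerable("2.9.1"))         # True  (vulnerable)
print(is_vulnerable("2.10.0+cu121"))  # False (patched)
```

In practice the input would come from `torch.__version__` or `importlib.metadata.version("torch")` on each host.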
Abstract
CVE-2026-24747 is a memory corruption vulnerability in PyTorch's weights_only restricted unpickler. The vulnerability arises from two independent flaws: (1) the unpickler permits SETITEM and SETITEMS serialisation opcodes to be applied to non-dictionary types, causing type confusion; and (2) declared storage sizes are not validated against actual storage allocations, enabling heap corruption through oversized boundary reads/writes (GHSA-63cw-57p8-fm3p). These flaws combine to allow an attacker to craft a malicious .pth checkpoint file that achieves arbitrary code execution when loaded via torch.load(..., weights_only=True) (NVD).
The vulnerability is particularly significant because weights_only=True was introduced as the recommended defence against deserialisation attacks in PyTorch model loading (GHSA-63cw-57p8-fm3p). Its compromise removes a key safety control in the ML model loading workflow. This finding is related to RAXE-2026-015 (PickleScan Bypass), which demonstrated that the principal third-party scanning tool for serialised ML files can also be circumvented — together, these findings indicate that the ML model loading supply chain is compromised at multiple defensive layers.
The vulnerability was reported by Ji'an Zhou (azraelxuemo) and patched in PyTorch 2.10.0 (GHSA-63cw-57p8-fm3p). It was published in the NVD on 2026-01-27 (NVD).
Key Findings
- **Safety mechanism defeated:** The `weights_only=True` parameter — PyTorch's recommended defence against arbitrary code execution during model loading — is bypassed by this vulnerability (GHSA-63cw-57p8-fm3p).
- **Two independent flaws combine for RCE:** Opcode type confusion (`SETITEM`/`SETITEMS` on non-dictionary types) and storage size mismatch (declared vs. actual allocation) combine to corrupt heap memory, enabling arbitrary code execution (GHSA-63cw-57p8-fm3p).
- **Reported weaponisation activity:** Two independent security researchers have publicly posted about exploit development for this CVE. @N3mes1s claimed a "fully weaponized" exploit as of 2026-03-04 [1]. @evilsocket stated that an AI tool was "very close to fully weaponize" the CVE as of 2026-03-03 [5]. These claims have not been independently verified and no public proof-of-concept code has been released.
- **Multi-layer defence failure (RAXE assessment):** Combined with `RAXE-2026-015` (PickleScan Bypass), both PyTorch's built-in safety mechanism and the principal external scanning tool for serialised ML files are assessed as broken for the PyTorch model loading workflow.
- **Broad attack surface:** All PyTorch versions prior to 2.10.0 across all platforms are affected, with no user privileges required and low attack complexity (NVD: `AV:N/AC:L/PR:N/UI:R`).
Attack Flow
┌─────────────────────┐
│ Attacker crafts │
│ malicious .pth file │
│ (checkpoint) │
└──────────┬──────────┘
│
▼
┌─────────────────────┐
│ Distribution via │
│ model hub, repo, │
│ or direct transfer │
│ (AV:N) │
└──────────┬──────────┘
│
▼
┌─────────────────────┐
│ Victim calls │
│ torch.load(path, │
│ weights_only=True) │
│ (UI:R) │
└──────────┬──────────┘
│
▼
┌─────────────────────┐
│ Flaw 1: SETITEM/ │
│ SETITEMS opcodes │
│ applied to non-dict │
│ → type confusion │
│ (GHSA-63cw-57p8) │
└──────────┬──────────┘
│
▼
┌─────────────────────┐
│ Flaw 2: Storage │
│ size mismatch → │
│ heap corruption │
│ (GHSA-63cw-57p8) │
└──────────┬──────────┘
│
▼
┌─────────────────────┐
│ Arbitrary code │
│ execution in victim │
│ process context │
│ (C:H/I:H/A:H) │
│ (NVD) │
└─────────────────────┘
MITRE ATLAS: AML.T0010 — AI Supply Chain Compromise
Kill Chain Mapping:
- Delivery: Malicious .pth file distributed via model repositories or direct transfer (NVD: AV:N)
- Exploitation: torch.load() processes crafted opcodes, triggering type confusion and heap corruption (GHSA-63cw-57p8-fm3p)
- Execution: Arbitrary code runs in the context of the Python process (NVD: C:H/I:H/A:H)
Technical Details
Vulnerability Root Cause
The PyTorch weights_only unpickler is a restricted deserialisation mechanism that limits which Python serialisation opcodes and global references can be resolved during model loading. It was designed to prevent the well-known arbitrary code execution risks inherent in Python's serialisation protocol (GHSA-63cw-57p8-fm3p).
The vulnerability comprises two independent implementation flaws in this restricted unpickler:
Flaw 1 — Opcode Type Confusion:
The unpickler permits SETITEM and SETITEMS serialisation opcodes to be applied to objects that are not dictionaries. In normal operation, these opcodes should only modify dictionary objects on the unpickler's stack. By targeting a non-dictionary type with these opcodes, an attacker induces type confusion within the unpickler's internal object graph (GHSA-63cw-57p8-fm3p).
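For context, these opcodes are ordinary parts of the pickle protocol. A short stdlib sketch shows where `SETITEMS` appears when a legitimate multi-key dictionary (such as a state dict) is serialised; the dict contents here are purely illustrative:

```python
# Disassemble the pickle stream of an ordinary dict to show the
# SETITEMS opcode that batch-inserts key/value pairs. In a legitimate
# stream, the object on the unpickler's stack at that point is a dict;
# the flaw is that the restricted unpickler did not enforce this.
import io
import pickle
import pickletools

stream = pickle.dumps({"weight": [1.0, 2.0], "bias": [0.5]})

out = io.StringIO()
pickletools.dis(stream, out=out)
listing = out.getvalue()
print("SETITEMS" in listing)  # True
```

An attacker abusing the flaw arranges for a non-dictionary object to be on the stack when the opcode executes, which the standard pickler would never emit.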
Flaw 2 — Storage Size Mismatch: The unpickler does not validate that declared storage sizes correspond to the actual storage allocations in the checkpoint archive. An attacker can declare a storage element count that exceeds the actual allocated memory region. When the runtime subsequently reads or writes tensor data using the declared (oversized) boundary, it accesses heap memory beyond the allocated buffer (GHSA-63cw-57p8-fm3p).
Exploitation
These two flaws combine to corrupt heap memory within the Python process executing torch.load(). Successful exploitation achieves arbitrary code execution with the privileges of the running process (NVD).
The attack requires:
1. The attacker to craft a malicious .pth checkpoint file containing the exploiting opcode sequence (GHSA-63cw-57p8-fm3p).
2. The victim to load the file using torch.load(path, weights_only=True) (NVD: UI:R).
3. No prior access to the victim's system is required (NVD: PR:N).
The specific opcode sequences required for exploitation are not publicly documented at time of writing. However, two independent security researchers have publicly posted about weaponised exploit development [1][5]; these claims remain unverified.
PyTorch Checkpoint File Format
PyTorch checkpoint files (.pth, .pt) are ZIP archives containing serialised tensor data and metadata. The internal structure typically includes:
- archive/data.pkl — serialised Python objects describing the model's state dictionary
- archive/data/0, archive/data/1, ... — raw tensor storage data
The weights_only unpickler restricts which Python globals can be resolved during deserialisation of data.pkl, limiting them to known-safe PyTorch functions such as torch._utils._rebuild_tensor_v2 and torch.storage._load_from_bytes. The vulnerability bypasses this restriction through the opcode and storage size flaws described above (GHSA-63cw-57p8-fm3p).
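Static triage of `data.pkl` can enumerate global references without ever unpickling the stream. The sketch below is a hunting aid, not the actual patch logic; the allowlist and the string-tracking heuristic for `STACK_GLOBAL` are assumptions for illustration:

```python
# Hunting sketch: list global references in a pickle stream without
# executing it. STACK_GLOBAL resolves its module/name from the two most
# recently pushed strings, so we track those heuristically; this is an
# approximation for triage, not a full pickle VM.
import pickletools

ALLOWED_PREFIXES = ("torch.", "collections.OrderedDict")

def suspicious_globals(pkl_bytes):
    findings = []
    recent = []  # last two string constants seen on the stream
    for opcode, arg, _pos in pickletools.genops(pkl_bytes):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            recent = (recent + [arg])[-2:]
        elif opcode.name == "GLOBAL":
            ref = arg.replace(" ", ".")
            if not ref.startswith(ALLOWED_PREFIXES):
                findings.append(ref)
        elif opcode.name == "STACK_GLOBAL" and len(recent) == 2:
            ref = f"{recent[0]}.{recent[1]}"
            if not ref.startswith(ALLOWED_PREFIXES):
                findings.append(ref)
    return findings
```

Against a real checkpoint, read `archive/data.pkl` out of the ZIP with `zipfile` and pass the bytes in. Note the limitation: an empty result means no non-allowlisted globals were seen, not that the file is safe, since the storage size flaw needs no global reference at all.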
Impact
| Impact | Rating | Detail | Source |
|---|---|---|---|
| Confidentiality | HIGH | Full read access to process memory | NVD |
| Integrity | HIGH | Arbitrary code execution in process context | NVD |
| Availability | HIGH | Process crash or denial of service | NVD |
Confidence & Validation
| Criterion | Status | Detail |
|---|---|---|
| Vendor Confirmed | Yes | GHSA published by PyTorch maintainer (malfet) (GHSA-63cw-57p8-fm3p) |
| CVE Assigned | Yes | CVE-2026-24747, status: Analysed (NVD) |
| Public PoC | No | No public proof-of-concept code identified at time of writing |
| Patch Available | Yes | Fixed in PyTorch 2.10.0 (GHSA-63cw-57p8-fm3p) |
| Exploited in Wild | Unknown | No KEV listing; unconfirmed claims of weaponisation from independent researchers [1][5] |
Admiralty Grade: B2 — Usually reliable source, probably true
- Source reliability: B (Usually Reliable) — NVD (grade A) and GHSA published by PyTorch maintainer. Weaponisation intelligence from independent Twitter researchers (grade C-D individually).
- Information credibility: 2 (Probably True) — Vulnerability confirmed, patched, and CVE assigned. Weaponisation claimed by two independent researchers [1][5] but no public PoC released; claims remain unverified.
Reporter: Ji'an Zhou (azraelxuemo) (GHSA-63cw-57p8-fm3p)
Detection Signatures (Formal Rules)
Rule 1: YARA — Suspicious PyTorch Checkpoint Opcodes
Limitations: This YARA rule provides exploratory hunting capability only. It matches code execution payload strings commonly associated with deserialisation attacks, but exploits targeting CVE-2026-24747 specifically leverage opcode type confusion and storage size mismatch rather than injecting traditional code execution globals; a sophisticated exploit may achieve arbitrary code execution through heap corruption alone, without embedding any of the signature strings below. Treat matches as leads for analyst investigation, not as confirmed detections, and treat the rule as a baseline indicator rather than comprehensive coverage. No publicly available exploit samples were identified for signature development at time of writing.
Detects .pth files containing code execution payloads or non-standard global references (GHSA-63cw-57p8-fm3p).
rule RAXE_2026_019_PyTorch_WeightsOnly_Exploit_Indicators
{
meta:
description = "Detects potential CVE-2026-24747 exploit indicators in PyTorch checkpoint files"
author = "RAXE Labs"
date = "2026-03-09"
reference = "https://nvd.nist.gov/vuln/detail/CVE-2026-24747"
advisory = "GHSA-63cw-57p8-fm3p"
severity = "high"
atlas_technique = "AML.T0010"
filetype = "pytorch_checkpoint"
strings:
// PyTorch checkpoint files are ZIP archives
$zip_magic = { 50 4B 03 04 }
// Standard torch rebuild functions (present in legitimate files)
$global_torch = "torch._utils._rebuild_tensor_v2"
$global_storage = "torch.storage._load_from_bytes"
// STACK_GLOBAL opcode (0x93) — used to resolve arbitrary globals
$stack_global = { 93 }
// Strings associated with code execution via deserialisation
$exec_os = "os.system" ascii
$exec_subprocess = "subprocess" ascii
$exec_eval = "builtins.eval" ascii
$exec_exec = "builtins.exec" ascii
$exec_import = "__import__" ascii
$exec_getattr = "builtins.getattr" ascii
$exec_apply = "apply" ascii
condition:
$zip_magic at 0 and
(
// Pattern A: Known code execution payload strings in checkpoint
any of ($exec_*) or
// Pattern B: STACK_GLOBAL opcode referencing non-standard modules
($stack_global and not $global_torch and not $global_storage)
)
}
Rule 2: Sigma — Suspicious torch.load Process Behaviour (Post-Exploitation Hunting)
Post-exploitation hunting rule. Detects suspicious process spawning from PyTorch model loading, which may indicate successful exploitation of CVE-2026-24747. This rule identifies post-exploitation activity, not the initial exploit delivery; false positives are expected in ML training environments (NVD: CVE-2026-24747).
title: PyTorch Checkpoint Loading Followed by Suspicious Process Spawn
id: raxe-2026-019-sigma-001
status: experimental
description: |
Detects when a Python process loading PyTorch checkpoints spawns
unexpected child processes, indicating potential exploitation of
CVE-2026-24747 (weights_only unpickler memory corruption leading
to arbitrary code execution). Reference: NVD CVE-2026-24747,
GHSA-63cw-57p8-fm3p.
author: RAXE Labs
date: 2026/03/09
references:
- https://nvd.nist.gov/vuln/detail/CVE-2026-24747
- https://github.com/pytorch/pytorch/security/advisories/GHSA-63cw-57p8-fm3p
tags:
- attack.execution
- attack.t1059
- cve.2026.24747
- atlas.aml.t0010
logsource:
category: process_creation
product: linux
detection:
selection_parent:
ParentImage|endswith:
- '/python'
- '/python3'
- '/python3.10'
- '/python3.11'
- '/python3.12'
- '/python3.13'
ParentCommandLine|contains:
- 'torch.load'
- 'torch'
- 'train.py'
- 'inference.py'
- 'model_loader'
selection_child:
Image|endswith:
- '/sh'
- '/bash'
- '/zsh'
- '/dash'
- '/curl'
- '/wget'
- '/nc'
- '/ncat'
- '/python'
- '/python3'
- '/perl'
- '/ruby'
condition: selection_parent and selection_child
falsepositives:
- Legitimate ML training scripts that invoke shell commands
- Jupyter notebook kernels running system commands
- CI/CD pipelines that wrap model loading with shell scripts
level: medium
Rule 3: Sigma — Untrusted PyTorch Checkpoint File Access (Delivery-Phase Hunting)
Delivery-phase hunting rule. Detects checkpoint files accessed from untrusted locations, which may indicate the delivery phase of an exploitation attempt. This is a broad hunting heuristic rather than a high-fidelity IOC match; legitimate ML workflows routinely access checkpoints from cached and temporary paths (ATLAS: AML.T0010).
title: PyTorch Checkpoint Loaded from Untrusted Location
id: raxe-2026-019-sigma-002
status: experimental
description: |
Detects file access to PyTorch checkpoint files (.pth, .pt, .bin)
from temporary directories, download folders, or world-writable paths.
Loading checkpoints from untrusted sources is the delivery mechanism
for CVE-2026-24747. Reference: NVD CVE-2026-24747 (AV:N, UI:R).
author: RAXE Labs
date: 2026/03/09
references:
- https://nvd.nist.gov/vuln/detail/CVE-2026-24747
- https://github.com/pytorch/pytorch/security/advisories/GHSA-63cw-57p8-fm3p
tags:
- attack.initial_access
- attack.t1195.002
- cve.2026.24747
- atlas.aml.t0010
logsource:
category: file_access
product: linux
detection:
selection_process:
Image|endswith:
- '/python'
- '/python3'
selection_file:
TargetFilename|endswith:
- '.pth'
- '.pt'
- '.bin'
- '.ckpt'
selection_path:
TargetFilename|contains:
- '/tmp/'
- '/var/tmp/'
- '/Downloads/'
- '/cache/'
- '.cache/huggingface/'
condition: selection_process and selection_file and selection_path
falsepositives:
- Legitimate model downloads cached by Hugging Face transformers library
- Temporary files created during model conversion workflows
- CI/CD pipelines that stage model files in temporary directories
level: medium
Rule 4: YARA — PyTorch Checkpoint Storage Size Anomaly
Exploratory hunting rule. Detects storage metadata inconsistencies that may indicate attempts to exploit the storage size mismatch flaw. Large integer patterns in checkpoint files are not inherently malicious; this rule generates leads for manual review (GHSA-63cw-57p8-fm3p).
rule RAXE_2026_019_PyTorch_Storage_Size_Anomaly
{
meta:
description = "Detects PyTorch checkpoint files with anomalous storage declarations that may exploit CVE-2026-24747 storage size mismatch"
author = "RAXE Labs"
date = "2026-03-09"
reference = "https://nvd.nist.gov/vuln/detail/CVE-2026-24747"
advisory = "GHSA-63cw-57p8-fm3p"
severity = "medium"
atlas_technique = "AML.T0010"
filetype = "pytorch_checkpoint"
strings:
$zip_magic = { 50 4B 03 04 }
$archive_data = "archive/data" ascii
$archive_data_pkl = "archive/data.pkl" ascii
$rebuild_storage = "_rebuild_tensor_v2" ascii
$torch_storage = "torch.storage" ascii
$large_int_pattern = /\d{12,}L/ ascii
condition:
$zip_magic at 0 and
($archive_data or $archive_data_pkl) and
($rebuild_storage or $torch_storage) and
$large_int_pattern
}
Detection & Mitigation
Detection Guidance
File-level monitoring: Deploy YARA Rules 1 and 4 (Section 9) against all PyTorch checkpoint files entering the organisation. Scan model repositories, download caches, and shared storage for indicators of malicious checkpoint files. Rule 1 detects known code execution payload strings and non-standard global references; Rule 4 detects anomalous storage size declarations (GHSA-63cw-57p8-fm3p).
Process-level monitoring: Deploy Sigma Rules 2 and 3 (Section 9) as hunting telemetry. Rule 2 monitors for suspicious child process spawning from Python processes associated with model loading (post-exploitation hunting). Rule 3 monitors for checkpoint files accessed from untrusted locations such as /tmp/, download directories, and Hugging Face caches (delivery-phase hunting). Both rules can match benign ML workflow activity and require analyst triage.
Correlation strategy: Alert on co-occurrence of Rule 1 (malicious file detection) and Rule 2 (suspicious process spawn) for highest confidence detection. Rule 3 (untrusted file access) and Rule 4 (storage size anomaly) serve as correlation enrichment.
Mitigation
Immediate (within 48 hours):
- Upgrade PyTorch to version 2.10.0 or later across all environments that load model checkpoints (GHSA-63cw-57p8-fm3p).
- Audit model loading workflows — identify all code paths that call `torch.load()`, with or without `weights_only=True`.
Short-term (within 2 weeks):
- Review model provenance — verify the integrity and origin of all `.pth` checkpoint files in use, particularly those sourced from public repositories.
- Implement model loading isolation — execute `torch.load()` in sandboxed environments (containers, restricted user contexts) to limit blast radius.
- Deploy detection rules — implement the YARA and Sigma rules from Section 9 in file scanning and endpoint detection pipelines.
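One lightweight isolation pattern is to push the load into a short-lived child process, so a crash or hijacked control flow never runs inside the main service. A minimal sketch follows; the helper name and timeout are illustrative, and a container or seccomp profile provides far stronger isolation than this alone:

```python
# Sketch: run torch.load in a disposable child process. Heap corruption
# or code execution triggered by a malicious checkpoint is then confined
# to the child, whose exit status tells the parent whether the load
# completed. This limits blast radius only; the child still runs as the
# same user, so it is not a sandbox by itself.
import subprocess
import sys

CHILD_CODE = (
    "import sys\n"
    "import torch\n"
    "torch.load(sys.argv[1], weights_only=True)\n"
)

def load_in_child(checkpoint_path, timeout=120):
    """Return True only if the child loaded the checkpoint cleanly."""
    proc = subprocess.run(
        [sys.executable, "-c", CHILD_CODE, checkpoint_path],
        capture_output=True,
        timeout=timeout,
    )
    return proc.returncode == 0

print(load_in_child("/nonexistent/model.pth"))  # False
```

On success the parent would re-load the now-vetted file itself, or have the child serialise the validated tensors back over a safe channel.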
Ongoing:
- Monitor for public PoC release — weaponised exploit development has been claimed by independent researchers but not verified; a public PoC would significantly escalate the threat.
- Review `RAXE-2026-015` (PickleScan Bypass) — organisations using PickleScan as a compensating control should note that both the scanner and the framework safety mechanism are assessed as broken (RAXE assessment).
Indicators of Compromise
| Type | Indicator | Confidence | Source |
|---|---|---|---|
| Behavioural | Python process loading `.pth` file spawns shell (`/bin/sh`, `/bin/bash`) | Low-Medium | Detection Rule 2 |
| Behavioural | Python process loading `.pth` file spawns download utility (`curl`, `wget`) | Medium | Detection Rule 2 |
| Behavioural | Python process loading `.pth` file spawns network utility (`nc`, `ncat`) | Medium | Detection Rule 2 |
| File | `.pth` file containing `os.system`, `subprocess`, `builtins.eval`, `builtins.exec`, or `__import__` strings | Medium | Detection Rule 1 |
| File | `.pth` file with `STACK_GLOBAL` opcode (0x93) but no standard `torch._utils` references | Medium | Detection Rule 1 |
| File | `.pth` file with anomalously large storage size declarations (12+ digit integers) | Low-Medium | Detection Rule 4 |
| Process | `torch.load()` invocation on files from `/tmp/`, `/Downloads/`, or `.cache/huggingface/` | Low-Medium | Detection Rule 3 |
Note: These indicators are based on advisory-described attack mechanisms and general deserialisation detection patterns. No publicly available exploit samples were identified for IOC extraction at time of writing.
Strategic Context
The Collapsing Trust Model for ML Model Loading
This vulnerability represents a systemic failure in the trust model for loading machine learning models. The weights_only=True parameter was PyTorch's recommended defence against deserialisation attacks — a control explicitly designed to allow safe loading of untrusted model files (GHSA-63cw-57p8-fm3p). Its compromise does not simply add one more vulnerability to track; it invalidates the security architecture pattern recommended in PyTorch's own documentation.
Compounding with RAXE-2026-015 (PickleScan Bypass)
RAXE-2026-015 demonstrated that PickleScan, the principal third-party tool for detecting malicious serialised ML files, can be bypassed. Combined with this finding, two key defensive layers for PyTorch model loading — the framework's built-in safety mechanism and the external scanning tool — are both assessed as broken (RAXE assessment). Organisations that implemented a defence-in-depth strategy using both controls now face a gap at multiple layers simultaneously.
Supply Chain Implications
The ML model supply chain relies heavily on shared pre-trained models distributed through repositories such as Hugging Face Hub. The attack vector for CVE-2026-24747 is network-based with low complexity (NVD: AV:N/AC:L) — an attacker need only publish a malicious checkpoint to a public repository and wait for a victim to download and load it. This aligns with MITRE ATLAS technique AML.T0010 (AI Supply Chain Compromise).
Weaponisation Timeline
The gap between CVE publication (2026-01-27, NVD) and reported weaponisation activity (2026-03-03 to 2026-03-04 [1][5]) was approximately 5 weeks. This is consistent with the typical timeline for sophisticated vulnerability exploitation development, and suggests that threat actors with deserialisation expertise may be able to weaponise such flaws within one to two months of public disclosure (RAXE assessment).
Regulatory and Compliance Outlook
Organisations subject to supply chain security requirements (NIST SP 800-161, EU AI Act Article 15) should assess whether their current model loading workflows meet the requisite supply chain integrity controls, given that PyTorch's recommended safety mechanism (weights_only=True) is now compromised (GHSA-63cw-57p8-fm3p).
References
1. @N3mes1s, "And we have a fully weaponized CVE-2026-24747," Twitter/X, 2026-03-04. https://x.com/N3mes1s/status/2029087991071711357
2. NVD, "CVE-2026-24747: PyTorch weights_only unpickler memory corruption," NIST, 2026-01-27. https://nvd.nist.gov/vuln/detail/CVE-2026-24747
3. PyTorch Security Advisory, "GHSA-63cw-57p8-fm3p: PyTorch weights_only unpickler RCE," GitHub, 2026-01-26. https://github.com/pytorch/pytorch/security/advisories/GHSA-63cw-57p8-fm3p
4. PyTorch, "Release v2.10.0," GitHub, 2026. https://github.com/pytorch/pytorch/releases/tag/v2.10.0
5. @evilsocket, "AI ( Pruva ) is very close to fully weaponize CVE-2026-24747," Twitter/X, 2026-03-03. https://x.com/evilsocket/status/2028859816563618214
RAXE Labs — AI Threat Intelligence
Classification: TLP:GREEN — IP Category B
Finding: RAXE-2026-019 | CVE-2026-24747 | GHSA-63cw-57p8-fm3p