CVE-2024-53058
- CVSS Base Score 5.5
- EPSS 0.05%
- Published 2024-11-19 18:15:25
- Last modified 2025-11-03 23:17:16
- Source 416baaa9-dc9f-4396-8d5f-8c081f
In the Linux kernel, the following vulnerability has been resolved:
net: stmmac: TSO: Fix unbalanced DMA map/unmap for non-paged SKB data
If the non-paged data of an SKB carries both the protocol header and the
protocol payload to be transmitted, on a platform whose DMA AXI address
width is configured to 40-bit/48-bit, or if the size of the non-paged data
is bigger than TSO_MAX_BUFF_SIZE on a platform whose DMA AXI address
width is configured to 32-bit, then this SKB requires at least
two DMA transmit descriptors to serve it.
For example, three descriptors are allocated to split one DMA buffer
mapped from one piece of non-paged data:
dma_desc[N + 0],
dma_desc[N + 1],
dma_desc[N + 2].
Then three elements of tx_q->tx_skbuff_dma[] will be allocated to hold
extra information to be reused in stmmac_tx_clean():
tx_q->tx_skbuff_dma[N + 0],
tx_q->tx_skbuff_dma[N + 1],
tx_q->tx_skbuff_dma[N + 2].
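For context, here is a minimal sketch of the splitting loop in the driver's
TSO allocator (modeled on stmmac_tso_allocator() in
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c; descriptor programming and
error handling are elided, and field names follow recent upstream kernels):

```c
/* Sketch only: several descriptors are carved out of ONE DMA mapping,
 * so at most one tx_q->tx_skbuff_dma[].buf may carry the address that
 * stmmac_tx_clean() is later allowed to unmap.
 */
static void tso_allocator_sketch(struct stmmac_priv *priv, dma_addr_t des,
				 int total_len, u32 queue)
{
	struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue];
	int tmp_len = total_len;

	while (tmp_len > 0) {
		int buff_size = min(tmp_len, TSO_MAX_BUFF_SIZE);

		/* each pass consumes one dma_desc[N + i] ... */
		tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx,
						priv->dma_conf.dma_tx_size);

		/* ... programmed with address des + (total_len - tmp_len)
		 * and length buff_size; all point into the same mapping */

		tmp_len -= buff_size;
	}
}
```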
Now we focus on tx_q->tx_skbuff_dma[entry].buf, which is the DMA buffer
address returned by the DMA mapping call. stmmac_tx_clean() will try to
unmap the DMA buffer _ONLY_IF_ tx_q->tx_skbuff_dma[entry].buf
is a valid buffer address, as the abridged check below shows.
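Abridged, that check in stmmac_tx_clean() looks roughly like this
(simplified from the upstream driver; the surrounding completion logic
is omitted):

```c
/* Only a non-NULL buf is treated as a mapping to release. */
if (tx_q->tx_skbuff_dma[entry].buf) {
	if (tx_q->tx_skbuff_dma[entry].map_as_page)
		dma_unmap_page(priv->device,
			       tx_q->tx_skbuff_dma[entry].buf,
			       tx_q->tx_skbuff_dma[entry].len,
			       DMA_TO_DEVICE);
	else
		dma_unmap_single(priv->device,
				 tx_q->tx_skbuff_dma[entry].buf,
				 tx_q->tx_skbuff_dma[entry].len,
				 DMA_TO_DEVICE);
	tx_q->tx_skbuff_dma[entry].buf = 0;
}
```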
The expected behavior, which saves the DMA buffer address of this
non-paged data to tx_q->tx_skbuff_dma[entry].buf, is:
tx_q->tx_skbuff_dma[N + 0].buf = NULL;
tx_q->tx_skbuff_dma[N + 1].buf = NULL;
tx_q->tx_skbuff_dma[N + 2].buf = dma_map_single();
Unfortunately, the current code misbehaves like this:
tx_q->tx_skbuff_dma[N + 0].buf = dma_map_single();
tx_q->tx_skbuff_dma[N + 1].buf = NULL;
tx_q->tx_skbuff_dma[N + 2].buf = NULL;
On the stmmac_tx_clean() side, when dma_desc[N + 0] is closed by the
DMA engine, tx_q->tx_skbuff_dma[N + 0].buf obviously holds a valid
buffer address, so the DMA buffer is unmapped immediately.
In a rare case, the DMA engine may not yet have finished the
pending dma_desc[N + 1] and dma_desc[N + 2]. Now things go
horribly wrong: the DMA engine accesses an unmapped/unreferenced memory
region, and corrupted data is transmitted or an IOMMU fault is
triggered :(
In contrast, the for-loop that maps SKB fragments behaves exactly
as expected, and that is how the driver should actually handle both
non-paged data and paged frags, as the abridged loop below shows.
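Abridged from stmmac_tso_xmit(), that loop maps each fragment, lets the TSO
allocator consume however many descriptors it needs, and only then records
the mapping at tx_q->cur_tx, i.e. at the last descriptor of the chain
(error handling omitted):

```c
for (i = 0; i < nfrags; i++) {
	const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

	des = skb_frag_dma_map(priv->device, frag, 0,
			       skb_frag_size(frag), DMA_TO_DEVICE);

	stmmac_tso_allocator(priv, des, skb_frag_size(frag),
			     (i == nfrags - 1), queue);

	/* cur_tx now indexes the LAST descriptor of this fragment,
	 * so only that entry carries the address to unmap */
	tx_q->tx_skbuff_dma[tx_q->cur_tx].buf = des;
	tx_q->tx_skbuff_dma[tx_q->cur_tx].len = skb_frag_size(frag);
	tx_q->tx_skbuff_dma[tx_q->cur_tx].map_as_page = true;
	tx_q->tx_skbuff_dma[tx_q->cur_tx].buf_type = STMMAC_TXBUF_T_SKB;
}
```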
This patch corrects the DMA map/unmap sequences by fixing the array index
for tx_q->tx_skbuff_dma[entry].buf when assigning the DMA buffer address;
a simplified sketch of the corrected flow follows.
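A sketch of the corrected sequence for the non-paged data, mirroring the
fragment loop above (this follows the commit description rather than the
verbatim patch; names such as tmp_pay_len follow the upstream
stmmac_tso_xmit(), and the header/payload descriptor split is elided):

```c
/* Map the linear (non-paged) data once ... */
des = dma_map_single(priv->device, skb->data, skb_headlen(skb),
		     DMA_TO_DEVICE);

/* ... let the TSO allocator consume dma_desc[N + 0 .. N + 2] ... */
stmmac_tso_allocator(priv, des, tmp_pay_len, (nfrags == 0), queue);

/* ... and record the mapping at the LAST descriptor (tx_q->cur_tx),
 * not at first_entry (dma_desc[N + 0]) as the buggy code did */
tx_q->tx_skbuff_dma[tx_q->cur_tx].buf = des;
tx_q->tx_skbuff_dma[tx_q->cur_tx].len = skb_headlen(skb);
tx_q->tx_skbuff_dma[tx_q->cur_tx].map_as_page = false;
tx_q->tx_skbuff_dma[tx_q->cur_tx].buf_type = STMMAC_TXBUF_T_SKB;
```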
Tested and verified on DWXGMAC CORE 3.20a.
Data provided by the National Vulnerability Database (NVD).
Linux ≫ Linux Kernel Version >= 4.7 < 5.15.171
Linux ≫ Linux Kernel Version >= 5.16 < 6.1.116
Linux ≫ Linux Kernel Version >= 6.2 < 6.6.60
Linux ≫ Linux Kernel Version >= 6.7 < 6.11.7
Linux ≫ Linux Kernel Version 6.12 Update rc1
Linux ≫ Linux Kernel Version 6.12 Update rc2
Linux ≫ Linux Kernel Version 6.12 Update rc3
Linux ≫ Linux Kernel Version 6.12 Update rc4
Linux ≫ Linux Kernel Version 6.12 Update rc5
| Type | Source | Score | Percentile |
|---|---|---|---|
| EPSS | FIRST.org | 0.05% | 0.137 |
| Source | Base Score | Exploit Score | Impact Score | Vector String |
|---|---|---|---|---|
| nvd@nist.gov | 5.5 | 1.8 | 3.6 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H |
| 134c704f-9b21-4f2e-91b3-4a467353bcc0 | 5.5 | 1.8 | 3.6 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H |