CVE-2023-52438
- CVSS Base Score 7.8
- EPSS 0.01%
- Published 2024-02-20 21:15:08
- Last modified 2024-11-21 08:39:45
- Source 416baaa9-dc9f-4396-8d5f-8c081f
- CVE watchlists
- Status: open
 
In the Linux kernel, the following vulnerability has been resolved:
binder: fix use-after-free in shrinker's callback
The mmap read lock is used during the shrinker's callback, which means
that using alloc->vma pointer isn't safe as it can race with munmap().
As of commit dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in
munmap") the mmap lock is downgraded after the vma has been isolated.
I was able to reproduce this issue by manually adding some delays and
triggering page reclaiming through the shrinker's debug sysfs. The
following KASAN report confirms the UAF:
  ==================================================================
  BUG: KASAN: slab-use-after-free in zap_page_range_single+0x470/0x4b8
  Read of size 8 at addr ffff356ed50e50f0 by task bash/478
  CPU: 1 PID: 478 Comm: bash Not tainted 6.6.0-rc5-00055-g1c8b86a3799f-dirty #70
  Hardware name: linux,dummy-virt (DT)
  Call trace:
   zap_page_range_single+0x470/0x4b8
   binder_alloc_free_page+0x608/0xadc
   __list_lru_walk_one+0x130/0x3b0
   list_lru_walk_node+0xc4/0x22c
   binder_shrink_scan+0x108/0x1dc
   shrinker_debugfs_scan_write+0x2b4/0x500
   full_proxy_write+0xd4/0x140
   vfs_write+0x1ac/0x758
   ksys_write+0xf0/0x1dc
   __arm64_sys_write+0x6c/0x9c
  Allocated by task 492:
   kmem_cache_alloc+0x130/0x368
   vm_area_alloc+0x2c/0x190
   mmap_region+0x258/0x18bc
   do_mmap+0x694/0xa60
   vm_mmap_pgoff+0x170/0x29c
   ksys_mmap_pgoff+0x290/0x3a0
   __arm64_sys_mmap+0xcc/0x144
  Freed by task 491:
   kmem_cache_free+0x17c/0x3c8
   vm_area_free_rcu_cb+0x74/0x98
   rcu_core+0xa38/0x26d4
   rcu_core_si+0x10/0x1c
   __do_softirq+0x2fc/0xd24
  Last potentially related work creation:
   __call_rcu_common.constprop.0+0x6c/0xba0
   call_rcu+0x10/0x1c
   vm_area_free+0x18/0x24
   remove_vma+0xe4/0x118
   do_vmi_align_munmap.isra.0+0x718/0xb5c
   do_vmi_munmap+0xdc/0x1fc
   __vm_munmap+0x10c/0x278
   __arm64_sys_munmap+0x58/0x7c
Fix this issue by performing instead a vma_lookup() which will fail to
find the vma that was isolated before the mmap lock downgrade. Note that
this option has better performance than upgrading to a mmap write lock
which would increase contention. Plus, mmap_write_trylock() has been
recently removed anyway.

Data provided by the National Vulnerability Database (NVD).
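The fix described above can be sketched as follows. This is a simplified, illustrative kernel-style sketch, not the exact upstream patch: the function name and the `alloc->mm` field access are assumptions; only `vma_lookup()`, `mmap_read_lock()`, and `zap_page_range_single()` are taken from the report's call trace and fix description.

```c
/* Illustrative sketch of the shrinker-callback fix (not the exact patch). */
static void binder_free_page_sketch(struct binder_alloc *alloc,
				    unsigned long page_addr)
{
	struct mm_struct *mm = alloc->mm;	/* owning address space (assumed field) */
	struct vm_area_struct *vma;

	mmap_read_lock(mm);

	/*
	 * Before the fix: the callback dereferenced the cached alloc->vma
	 * pointer. Because munmap() downgrades to the mmap read lock after
	 * the vma has been isolated, that cached pointer can already point
	 * at freed memory, producing the UAF in the KASAN report above.
	 *
	 * After the fix: a fresh vma_lookup() is performed instead, which
	 * simply fails to find a vma that munmap() has already isolated.
	 */
	vma = vma_lookup(mm, page_addr);
	if (vma)
		zap_page_range_single(vma, page_addr, PAGE_SIZE, NULL);

	mmap_read_unlock(mm);
}
```

The lookup-per-call approach avoids upgrading to the mmap write lock, which, as the commit notes, would increase contention.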
	Linux ≫ Linux Kernel  Version >= 4.20.0 < 5.4.268
	Linux ≫ Linux Kernel  Version >= 5.5.0 < 5.10.209
	Linux ≫ Linux Kernel  Version >= 5.11.0 < 5.15.148
	Linux ≫ Linux Kernel  Version >= 5.16.0 < 6.1.74
	Linux ≫ Linux Kernel  Version >= 6.2.0 < 6.6.13
	Linux ≫ Linux Kernel  Version >= 6.7.0 < 6.7.1
| Type | Source | Score | Percentile |
|---|---|---|---|
| EPSS | FIRST.org | 0.01% | 0.016 |
| Source | Base Score | Exploit Score | Impact Score | Vector String |
|---|---|---|---|---|
| nvd@nist.gov | 7.8 | 1.8 | 5.9 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H |
CWE-416 Use After Free
The product reuses or references memory after it has been freed. At some point afterward, the memory may be allocated again and saved in another pointer, while the original pointer references a location somewhere within the new allocation. Any operations using the original pointer are no longer valid because the memory "belongs" to the code that operates on the new pointer.