{"document":{"aggregate_severity":{"namespace":"https://www.suse.com/support/security/rating/","text":"not set"},"category":"csaf_vex","csaf_version":"2.0","distribution":{"text":"Copyright 2024 SUSE LLC. All rights reserved.","tlp":{"label":"WHITE","url":"https://www.first.org/tlp/"}},"lang":"en","notes":[{"category":"summary","text":"SUSE CVE-2026-23157","title":"Title"},{"category":"description","text":"In the Linux kernel, the following vulnerability has been resolved:\n\nbtrfs: do not strictly require dirty metadata threshold for metadata writepages\n\n[BUG]\nThere is an internal report that over 1000 processes are\nwaiting at the io_schedule_timeout() of balance_dirty_pages(), causing\na system hang and triggering a kernel coredump.\n\nThe kernel is based on v6.4, but the root problem still applies to\nany upstream kernel before v6.18.\n\n[CAUSE]\nFirst, quoting Jan Kara for his wisdom on the dirty page balancing behavior:\n\n  This cgroup dirty limit was what was actually playing the role here\n  because the cgroup had only a small amount of memory and so the dirty\n  limit for it was something like 16MB.\n\n  Dirty throttling is responsible for enforcing that nobody can dirty\n  (significantly) more dirty memory than there's dirty limit. Thus when\n  a task is dirtying pages it periodically enters into balance_dirty_pages()\n  and we let it sleep there to slow down the dirtying.\n\n  When the system is over dirty limit already (either globally or within\n  a cgroup of the running task), we will not let the task exit from\n  balance_dirty_pages() until the number of dirty pages drops below the\n  limit.\n\n  So in this particular case, as I already mentioned, there was a cgroup\n  with relatively small amount of memory and as a result with dirty limit\n  set at 16MB. 
A task from that cgroup has dirtied about 28MB worth of\n  pages in btrfs btree inode and these were practically the only dirty\n  pages in that cgroup.\n\nSo that means the only way to reduce the dirty pages of that cgroup is\nto write back the dirty pages of the btrfs btree inode, and only after that\ncan those processes exit balance_dirty_pages().\n\nNow back to the btrfs part: btree_writepages() is responsible for\nwriting back dirty btree inode pages.\n\nThe problem here is that btrfs has an internal threshold: if the\nbtree inode's dirty bytes are below 32MiB, it will not\ndo any writeback.\n\nThis behavior is to batch as much metadata as possible so we won't write\nback those tree blocks and then later re-COW them for another\nmodification.\n\nThis internal 32MiB threshold is higher than the amount of existing dirty\npages (28MiB), meaning no writeback will happen, causing a deadlock between btrfs and\nthe cgroup:\n\n- Btrfs doesn't want to write back the btree inode until it accumulates more dirty pages\n\n- Cgroup/MM doesn't want more dirty pages for the btrfs btree inode.\n  Thus any process touching that btree inode is put to sleep until\n  the number of dirty pages is reduced.\n\nMany thanks to Jan Kara for the analysis of the root cause.\n\n[ENHANCEMENT]\nSince kernel commit b55102826d7d (\"btrfs: set AS_KERNEL_FILE on the\nbtree_inode\"), btrfs btree inode pages will only be charged to the root\ncgroup, which should have a much larger limit than btrfs' 32MiB\nthreshold.\nSo this problem should not affect newer kernels.\n\nAll current LTS kernels, however, are affected by this problem,\nand backporting the whole AS_KERNEL_FILE change may not be a good idea.\n\nEven for newer kernels I still think it's a good idea to get\nrid of the internal threshold at btree_writepages(), since in most cases\ncgroup/MM has a better view of full system memory usage than btrfs' fixed\nthreshold.\n\nInternal callers go through btrfs_btree_balance_dirty(), which already does its own 
threshold check, so we don't need to\nbother them.\n\nBut for external callers of btree_writepages(), just respect their\nrequests and write back whatever they want, ignoring the internal\nbtrfs threshold to avoid such a deadlock on btree inode dirty page\nbalancing.","title":"Description of the CVE"},{"category":"legal_disclaimer","text":"CSAF 2.0 data is provided by SUSE under the Creative Commons License 4.0 with Attribution (CC-BY-4.0).","title":"Terms of use"}],"publisher":{"category":"vendor","contact_details":"https://www.suse.com/support/security/contact/","name":"SUSE Product Security Team","namespace":"https://www.suse.com/"},"references":[{"category":"external","summary":"CVE-2026-23157","url":"https://www.suse.com/security/cve/CVE-2026-23157"},{"category":"external","summary":"SUSE Security Ratings","url":"https://www.suse.com/support/security/rating/"}],"title":"SUSE CVE CVE-2026-23157","tracking":{"current_release_date":"2026-02-16T00:26:02Z","generator":{"date":"2026-02-16T00:26:02Z","engine":{"name":"cve-database.git:bin/generate-csaf-vex.pl","version":"1"}},"id":"CVE-2026-23157","initial_release_date":"2026-02-16T00:26:02Z","revision_history":[{"date":"2026-02-16T00:26:02Z","number":"2","summary":"references added,severity changed from  to not set"}],"status":"interim","version":"2"}}}