KERNEL_LOCK(9)            Kernel Developer's Manual            KERNEL_LOCK(9)
NAME
KERNEL_LOCK — kernel giant lock
SYNOPSIS
#include <sys/systm.h>

void
KERNEL_LOCK(int nlocks, struct lwp *l);

void
KERNEL_UNLOCK_ONE(struct lwp *l);

void
KERNEL_UNLOCK_ALL(struct lwp *l, int *nlocksp);

void
KERNEL_UNLOCK_LAST(struct lwp *l);

bool
KERNEL_LOCKED_P(void);
DESCRIPTION
The KERNEL_LOCK facility serves to gradually transition software from the
kernel's legacy uniprocessor execution model, in which the kernel runs on
only a single CPU and never in parallel on multiple CPUs, to a
multiprocessor system.
New code should not use KERNEL_LOCK. KERNEL_LOCK is meant only for the
gradual transition of NetBSD to natively MP-safe code, which uses mutex(9)
or other locking(9) facilities to synchronize between threads and interrupt
handlers. Use of KERNEL_LOCK hurts system performance and responsiveness.
This man page exists only to document the legacy API in order to make it
easier to transition away from.
The kernel lock, sometimes also known as ‘giant lock’ or ‘big lock’, is a recursive exclusive spin-lock that can be held by a CPU at any interrupt priority level and is dropped while sleeping. This means:
•   Holding the kernel lock for long periods of time, such as during
    nontrivial computation, must be avoided. Under LOCKDEBUG kernels,
    holding the kernel lock for too long can lead to ‘spinout’ crashes.
•   Interrupt handlers that are not marked MP-safe always run with the
    kernel lock held. If the interrupt arrives on a CPU where the kernel
    lock is already held, it is simply taken again recursively on interrupt
    entry and released to its original recursion depth on interrupt exit.
•   The kernel lock is dropped while sleeping. Although data structures
    accessed only under the kernel lock won't be changed before the sleep,
    they may be changed by another thread during the sleep. For example,
    the following program may crash on an assertion failure, because the
    sleep in mutex_enter(9) can allow another CPU to run and change the
    global variable x:
	KERNEL_LOCK(1, NULL);
	x = 42;
	mutex_enter(...);	/* may sleep, dropping the kernel lock */
	...
	mutex_exit(...);
	KASSERT(x == 42);	/* may fail: x may have changed during the sleep */
	KERNEL_UNLOCK_ONE(NULL);
    
•   Simply introducing calls to mutex_enter(9) and mutex_exit(9) can break
    kernel-locked assumptions. Subsystems need to be consistently converted
    from KERNEL_LOCK and spl(9) to mutex(9), condvar(9), etc.; mixing
    mutex(9) and KERNEL_LOCK usually doesn't work.
•   Holding the kernel lock does not prevent other code from running on
    other CPUs at the same time; it only prevents other kernel-locked code
    from running on other CPUs at the same time.
KERNEL_LOCK(nlocks, l)
	If the kernel lock is already held by another CPU, spins until it
	can be acquired by this one. If the kernel lock is already held by
	this CPU, records the kernel lock recursion depth and returns
	immediately.

	Most of the time nlocks is 1, but code that deliberately releases
	all of the kernel locks held by the current CPU in order to sleep,
	and later reacquires the same number of kernel locks, will pass a
	value of nlocks obtained from KERNEL_UNLOCK_ALL().
KERNEL_UNLOCK_ONE(l)
	Equivalent to KERNEL_UNLOCK(1, l, NULL).

KERNEL_UNLOCK_ALL(l, nlocksp)
	Releases all levels of the kernel lock held by the current CPU and
	stores their number in *nlocksp. This is often used inside logic
	implementing sleep, around a call to mi_switch(9), so that the same
	number of recursive kernel locks can be reacquired afterward once
	the thread is reawoken:
	int nlocks;
	KERNEL_UNLOCK_ALL(l, &nlocks);
	... mi_switch(l) ...
	KERNEL_LOCK(nlocks, l);
    
KERNEL_UNLOCK_LAST(l)
	Releases the last level of the kernel lock held by this CPU. This
	is normally used at the end of a non-MP-safe thread, which was
	known to have started with exactly one level of the kernel lock,
	and is now about to exit.
KERNEL_LOCKED_P()
	True if the kernel lock is held by the current CPU, false
	otherwise. To be used only in diagnostic assertions with
	KASSERT(9).
The legacy argument l must be NULL or curlwp, which mean the same thing.
Various parts of the kernel use flags to mark objects and handlers that
are MP-safe, i.e. that do not require the kernel lock:

	CALLOUT_MPSAFE
	FILTEROPS_MPSAFE
	KTHREAD_MPSAFE
	PCI_INTR_MPSAFE
	SCSIPI_ADAPT_MPSAFE
	SOFTINT_MPSAFE
	USBD_MPSAFE
	USB_TASKQ_MPSAFE
	VV_MPSAFE
	WQ_MPSAFE

The following NetBSD subsystems are still kernel-locked and need
re-engineering to take advantage of parallelism on multiprocessor systems:
•   the network stack, unless NET_MPSAFE is enabled

All interrupt handlers at IPL_VM or lower (spl(9)) run with the kernel
lock on most ports.
NetBSD 10.0                  February 13, 2022                   NetBSD 10.0