Run one command, get a QEMU or gem5 Buildroot BusyBox virtual machine built from source with several minimal Linux kernel 4.17 module development example tutorials with GDB and KGDB step debugging and minimal educational hardware device models. "Tested" in x86, ARM and MIPS guests, Ubuntu 18.04 host.
- 1. Getting started
- 2. GDB step debug
- 3. KGDB
- 4. gdbserver
- 5. CPU architecture
- 6. init
- 7. KVM
- 8. X11
- 9. initrd
- 10. Linux kernel
- 10.1. Linux kernel configuration
- 10.2. Kernel version
- 10.3. Kernel module APIs
- 10.4. Kernel panic and oops
- 10.5. Pseudo filesystems
- 10.6. Pseudo files
- 10.7. kthread
- 10.8. Timers
- 10.9. IRQ
- 10.10. Kernel utility functions
- 10.11. Linux kernel tracing
- 10.12. Linux kernel hardening
- 10.13. User mode Linux
- 10.14. UIO
- 10.15. Linux kernel interactive stuff
- 10.16. DRM
- 10.17. Linux kernel testing
- 11. QEMU
- 12. gem5
- 12.1. gem5 getting started
- 12.2. gem5 vs QEMU
- 12.3. gem5 run benchmark
- 12.4. gem5 kernel command line parameters
- 12.5. gem5 GDB step debug
- 12.6. gem5 checkpoint
- 12.7. Pass extra options to gem5
- 12.8. gem5 exit after a number of instructions
- 12.9. m5ops
- 12.10. gem5 arm Linux kernel patches
- 12.11. m5term
- 12.12. gem5 stats
- 12.13. gem5 Python scripts without rebuild
- 12.14. gem5 fs_bigLITTLE
- 13. Buildroot
- 14. Benchmark this repo
- 15. Conversation
This is the best setup if you are on one of the supported systems: Ubuntu 16.04 or Ubuntu 18.04.
Everything will likely also work on other Linux distros if you install the analogous required packages for your distro from configure, but this is not currently well tested. Compatibility patches are welcome. You can also try Getting started with Docker if you are on other distros.
Reserve 12 GB of disk and run:
git clone https://github.com/cirosantilli/linux-kernel-module-cheat
cd linux-kernel-module-cheat
./configure && ./build && ./run
It is also trivial to build for different supported CPU architectures.
The first configure will take a while (30 minutes to 2 hours) to clone and build, see Benchmark builds for more details.
If you don’t want to wait, you could also try the following faster but much more limited methods:
but you will soon find that they are simply not enough if you are anywhere near serious about systems programming.
After QEMU opens up, you can start playing with the kernel modules:
insmod /hello.ko
insmod /hello2.ko
rmmod hello
rmmod hello2
This should print to the screen:
hello init
hello2 init
hello cleanup
hello2 cleanup
which are printk messages from init and cleanup methods of those modules.
Source:
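For orientation, a module of this kind boils down to little more than the following sketch. This is illustrative only: the actual kernel_module/hello.c in the repo may differ in details, and kernel code like this builds against kernel headers, not on the host directly.

```c
#include <linux/module.h>
#include <linux/kernel.h>

/* Sketch of a minimal "hello" module: one printk on insmod, one on rmmod. */
static int __init hello_init(void)
{
	printk(KERN_INFO "hello init\n");
	return 0;
}

static void __exit hello_exit(void)
{
	printk(KERN_INFO "hello cleanup\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
```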
Once you use GDB step debug and tmux, your terminal will look a bit like this:
[ 1.451857] input: AT Translated Set 2 keyboard as /devices/platform/i8042/s1│loading @0xffffffffc0000000: ../kernel_module-1.0//timer.ko
[ 1.454310] ledtrig-cpu: registered to indicate activity on CPUs │(gdb) b lkmc_timer_callback
[ 1.455621] usbcore: registered new interface driver usbhid │Breakpoint 1 at 0xffffffffc0000000: file /home/ciro/bak/git/linux-kernel-module
[ 1.455811] usbhid: USB HID core driver │-cheat/out/x86_64/buildroot/build/kernel_module-1.0/./timer.c, line 28.
[ 1.462044] NET: Registered protocol family 10 │(gdb) c
[ 1.467911] Segment Routing with IPv6 │Continuing.
[ 1.468407] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver │
[ 1.470859] NET: Registered protocol family 17 │Breakpoint 1, lkmc_timer_callback (data=0xffffffffc0002000 <mytimer>)
[ 1.472017] 9pnet: Installing 9P2000 support │ at /linux-kernel-module-cheat//out/x86_64/buildroot/build/
[ 1.475461] sched_clock: Marking stable (1473574872, 0)->(1554017593, -80442)│kernel_module-1.0/./timer.c:28
[ 1.479419] ALSA device list: │28 {
[ 1.479567] No soundcards found. │(gdb) c
[ 1.619187] ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 │Continuing.
[ 1.622954] ata2.00: configured for MWDMA2 │
[ 1.644048] scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ P5│Breakpoint 1, lkmc_timer_callback (data=0xffffffffc0002000 <mytimer>)
[ 1.741966] tsc: Refined TSC clocksource calibration: 2904.010 MHz │ at /linux-kernel-module-cheat//out/x86_64/buildroot/build/
[ 1.742796] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x29dc0f4s│kernel_module-1.0/./timer.c:28
[ 1.743648] clocksource: Switched to clocksource tsc │28 {
[ 2.072945] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8043│(gdb) bt
[ 2.078641] EXT4-fs (vda): couldn't mount as ext3 due to feature incompatibis│#0 lkmc_timer_callback (data=0xffffffffc0002000 <mytimer>)
[ 2.080350] EXT4-fs (vda): mounting ext2 file system using the ext4 subsystem│ at /linux-kernel-module-cheat//out/x86_64/buildroot/build/
[ 2.088978] EXT4-fs (vda): mounted filesystem without journal. Opts: (null) │kernel_module-1.0/./timer.c:28
[ 2.089872] VFS: Mounted root (ext2 filesystem) readonly on device 254:0. │#1 0xffffffff810ab494 in call_timer_fn (timer=0xffffffffc0002000 <mytimer>,
[ 2.097168] devtmpfs: mounted │ fn=0xffffffffc0000000 <lkmc_timer_callback>) at kernel/time/timer.c:1326
[ 2.126472] Freeing unused kernel memory: 1264K │#2 0xffffffff810ab71f in expire_timers (head=<optimized out>,
[ 2.126706] Write protecting the kernel read-only data: 16384k │ base=<optimized out>) at kernel/time/timer.c:1363
[ 2.129388] Freeing unused kernel memory: 2024K │#3 __run_timers (base=<optimized out>) at kernel/time/timer.c:1666
[ 2.139370] Freeing unused kernel memory: 1284K │#4 run_timer_softirq (h=<optimized out>) at kernel/time/timer.c:1692
[ 2.246231] EXT4-fs (vda): warning: mounting unchecked fs, running e2fsck isd│#5 0xffffffff81a000cc in __do_softirq () at kernel/softirq.c:285
[ 2.259574] EXT4-fs (vda): re-mounted. Opts: block_validity,barrier,user_xatr│#6 0xffffffff810577cc in invoke_softirq () at kernel/softirq.c:365
hello S98 │#7 irq_exit () at kernel/softirq.c:405
│#8 0xffffffff818021ba in exiting_irq () at ./arch/x86/include/asm/apic.h:541
Apr 15 23:59:23 login[49]: root login on 'console' │#9 smp_apic_timer_interrupt (regs=<optimized out>)
hello /root/.profile │ at arch/x86/kernel/apic/apic.c:1052
# insmod /timer.ko │#10 0xffffffff8180190f in apic_timer_interrupt ()
[ 6.791945] timer: loading out-of-tree module taints kernel. │ at arch/x86/entry/entry_64.S:857
# [ 7.821621] 4294894248 │#11 0xffffffff82003df8 in init_thread_union ()
[ 8.851385] 4294894504 │#12 0x0000000000000000 in ?? ()
│(gdb)
All available modules can be found in the kernel_module directory.
We will try to support the following Ubuntu versions at least:
-
if the latest release is an LTS, support both the latest LTS and the previous one
-
otherwise, support both latest LTS and the latest non-LTS
This is a good option if you are on a Linux host, but the native build failed due to your weird host distribution.
Before anything, you must get rid of any host build files on out/ if you have any. A simple way to do this is to:
mv out out.host
A cleaner option is to make a separate clone of this repository just for Docker, although this will require another submodule update.
Then install Docker, e.g. on Ubuntu:
sudo apt-get install docker.io
The very first time you launch Docker, create the container with:
./rundocker setup
You are now left inside a shell in the Docker guest.
From there, run the exact same commands that you would on a native install: Getting started
The host git top level directory is mounted inside the guest, which means for example that you can use your host’s GUI text editor directly on the files.
Just don’t forget that if you nuke that directory on the guest, then it gets nuked on the host as well!
Trying to run the output built inside Docker from the host won’t work however. I think the main reason is that the absolute paths inside Docker will differ from the host ones, but even if we fixed that, there would likely be other problems.
TODO make files created inside Docker be owned by the current user in host instead of root: https://stackoverflow.com/questions/23544282/what-is-the-best-way-to-manage-permissions-for-docker-shared-volumes
Quit and stop the container:
Ctrl-D
Restart the container:
./rundocker
Open a second shell in a running container:
./rundocker sh
You will need this for example to use GDB step debug.
Start a second shell, and run a command in it at the same time:
./rundocker sh ./rungdb start_kernel
Docker stops if and only if you quit the initial shell; you can quit this second one without consequences.
If you mistakenly run ./rundocker twice, it opens two mirrored terminals. To quit one of them do https://stackoverflow.com/questions/19688314/how-do-you-attach-and-detach-from-dockers-process:
Ctrl-P Ctrl-Q
To use Graphic mode from Docker:
./run -Vx
and then on host:
sudo apt-get install vinagre
./vnc
Destroy the docker container:
./rundocker DELETE
Since we mount the guest’s working directory on the host git top-level, you will likely not lose data from doing this, just the apt-get installs.
To get back to a host build, don’t forget to clean up out/ again:
mv out out.docker
mv out.host out
After this, to start using Docker again you will need another:
./rundocker setup
We don’t currently provide a full prebuilt because it would be too big to host freely, notably because of the cross toolchain.
However, we do provide a prebuilt filesystem and kernel, which allows you to quickly try out running our kernel modules:
-
Download QEMU and this repo:
sudo apt-get install qemu-system-x86
git clone https://github.com/cirosantilli/linux-kernel-module-cheat
cd linux-kernel-module-cheat
-
go to the latest release https://github.com/cirosantilli/linux-kernel-module-cheat/releases, download the images-*.zip file, and extract it into the repository:
unzip images-*.zip
It is not possible to automate this step without the API, and I’m not venturing there at this time, pull requests welcome.
-
check out the prebuilt repo version so that the scripts and documentation will be compatible with it, and run with the -P option:
git checkout <release-sha>
./run -P
Limitations of this method:
-
can’t GDB step debug the kernel, since the source and cross toolchain with GDB are not available. Buildroot cannot easily use a host toolchain: Buildroot use prebuilt host toolchain.
Maybe we could work around this by just downloading the kernel source somehow, and using a host prebuilt GDB, but we felt that it would be too messy and unreliable.
-
can’t create new modules or modify the existing ones, since there is no cross toolchain
-
can’t use things that rely on our QEMU fork, e.g. in-fork Device models or Tracing
-
you won’t get the latest version of this repository. Our Travis attempt to automate builds failed, and storing a release for every commit would likely make GitHub mad at us.
-
gem5 is not currently supported, although it should not be too hard to do. One annoyance is that there is no Debian package for it, so you have to compile your own, so you might as well just build the image itself.
This method runs the kernel modules directly on your host computer without a VM, and saves you the compilation time and disk usage of the virtual machine method.
It has however severe limitations, and you will soon see that the compilation time and disk usage are well worth it:
-
can’t control which kernel version and build options to use. So some of the modules will likely not compile because of kernel API changes, since the Linux kernel does not have a stable kernel module API.
-
bugs can easily break your system. E.g.:
-
segfaults can trivially lead to a kernel crash, and require a reboot
-
your disk could get erased. Yes, this can also happen with sudo from userland. But you should not use sudo when developing newbie programs. And for the kernel, you don’t have the choice not to use sudo.
-
even more subtle system corruption such as not being able to rmmod
-
-
can’t control which hardware is used, notably the CPU architecture
-
can’t step debug it with GDB easily. The alternatives are JTAG or KGDB, but those are less reliable, and JTAG requires extra hardware.
Still interested?
cd kernel_module
./make-host.sh
If the compilation of any of the C files fails because of kernel or toolchain differences that we don’t control on the host, just rename it to remove the .c extension and try again:
mv broken.c broken.c~
./make-host.sh
Once you manage to compile, and have come to terms with the fact that this may blow up your host, try it out with:
sudo insmod hello.ko
# Our module is there.
sudo lsmod | grep hello
# Last message should be: hello init
dmesg -T
sudo rmmod hello
# Last message should be: hello exit
dmesg -T
# Not present anymore.
sudo lsmod | grep hello
Once you are done with this method, you must clean up the in-tree build objects before you decide to do the right thing and move on to the superior ./build Buildroot method:
cd kernel_module
./make-host.sh clean
otherwise they will cause problems.
Minimal host build system example:
cd hello_host
make
insmod hello.ko
dmesg
rmmod hello.ko
dmesg
By default, we show the serial console directly on the current terminal, without opening a QEMU window.
Quit QEMU immediately:
Ctrl-A X
Alternative methods:
-
quit command on the QEMU monitor
-
pkill qemu
TODO: if you hit Ctrl-C several times while arm or aarch64 are booting, after boot the userland shell does not show any updates when you type, this seems to be a bug on the Linux kernel v4.16: http://lists.nongnu.org/archive/html/qemu-discuss/2018-04/msg00027.html
Enable graphic mode:
./run -x
Text mode is the default due to the following considerable advantages:
-
copy and paste commands and stdout output to / from host
-
get full panic traces when you start making the kernel crash :-) See also: https://unix.stackexchange.com/questions/208260/how-to-scroll-up-after-a-kernel-panic
-
have a large scroll buffer, and be able to search it, e.g. by using tmux on host
-
one less window floating around to think about in addition to your shell :-)
-
graphics mode has only been properly tested on x86_64.
Text mode has the following limitations over graphics mode:
-
you can’t see graphics such as those produced by X11
-
very early kernel messages such as
early console in extract_kernel only show on the GUI, since at such early stages not even the serial has been set up.
x86_64 has a VGA device enabled by default, as can be seen with:
./qemumonitor info qtree
and the Linux kernel picks it up through the fbdev graphics system as can be seen from:
cat /dev/urandom > /dev/fb0
flooding the screen with colors. See also: https://superuser.com/questions/223094/how-do-i-know-if-i-have-kms-enabled
TODO: on arm, we see the penguin and some boot messages, but don’t get a shell at the end:
./run -a aarch64 -x
I think it does not work because the graphic window is DRM only, i.e.:
cat /dev/urandom > /dev/fb0
fails with:
cat: write error: No space left on device
and has no effect, and the Linux kernel does not appear to have a built-in DRM console as it does for fbdev with fbcon.
There is however one out-of-tree implementation: kmscon.
arm and aarch64 rely on the QEMU CLI option:
-device virtio-gpu-pci
and the kernel config options:
CONFIG_DRM=y
CONFIG_DRM_VIRTIO_GPU=y
Unlike x86, arm and aarch64 don’t have a display device attached by default, thus the need for virtio-gpu-pci.
See also https://wiki.qemu.org/Documentation/Platforms/ARM (recently edited and corrected by yours truly… :-)).
TODO: how to use VGA on ARM? https://stackoverflow.com/questions/20811203/how-can-i-output-to-vga-through-qemu-arm Tried:
-device VGA
# We use virtio-gpu because the legacy VGA framebuffer is
# very troublesome on aarch64, and virtio-gpu is the only
# video device that doesn't implement it.
so maybe it is not possible?
TODO could not get it working on x86_64, only ARM.
More concretely:
git -C linux checkout gem5/v4.15
./build -gl -aa -K linux/arch/arm/configs/gem5_defconfig -L gem5-v4.15
git -C linux checkout -
./run -aa -g -L gem5-v4.15
and then on another shell:
vinagre localhost:5900
The CONFIG_LOGO penguin only appears after several seconds, together with kernel messages of type:
[    0.152755] [drm] found ARM HDLCD version r0p0
[    0.152790] hdlcd 2b000000.hdlcd: bound virt-encoder (ops 0x80935f94)
[    0.152795] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[    0.152799] [drm] No driver support for vblank timestamp query.
[    0.215179] Console: switching to colour frame buffer device 240x67
[    0.230389] hdlcd 2b000000.hdlcd: fb0: frame buffer device
[    0.230509] [drm] Initialized hdlcd 1.0.0 20151021 for 2b000000.hdlcd on minor 0
The port 5900 is incremented by one if you already have something running on that port; gem5 tells us the right port on stdout:
system.vncserver: Listening for connections on port 5900
and when we connect it shows a message:
info: VNC client attached
Alternatively, you can also view the frames with --frame-capture:
./run -aa -g -L gem5-v4.15 -- --frame-capture
This option dumps one compressed PNG whenever the screen image changes inside m5out, indexed by the cycle ID. This allows for more controlled experiments.
It is fun to see how we get one new frame whenever the white underscore cursor appears and reappears under the penguin.
TODO kmscube failed on aarch64 with:
kmscube[706]: unhandled level 2 translation fault (11) at 0x00000000, esr 0x92000006, in libgbm.so.1.0.0[7fbf6a6000+e000]
Tested on: 38fd6153d965ba20145f53dc1bb3ba34b336bde9
For aarch64 we also need -c kernel_config_fragment/display:
git -C linux checkout gem5/v4.15
./build -gl -aA \
  -c kernel_config_fragment/display \
  -K linux/arch/arm64/configs/gem5_defconfig \
  -L gem5-v4.15 \
;
git -C linux checkout -
./run -aA -gu -L gem5-v4.15
This is because the gem5 aarch64 defconfig does not enable HDLCD like the 32-bit arm one does, for some reason.
We cannot use mainline Linux because the gem5 arm Linux kernel patches are required at least to provide the CONFIG_DRM_VIRT_ENCODER option.
gem5 emulates the HDLCD ARM Holdings hardware for arm and aarch64.
The kernel uses HDLCD to implement the DRM interface, the required kernel config options are present at: kernel_config_fragment/display.
TODO: minimize out the -K. If we just remove it on arm, it does not work, with a failing dmesg:
[    0.066208] [drm] found ARM HDLCD version r0p0
[    0.066241] hdlcd 2b000000.hdlcd: bound virt-encoder (ops drm_vencoder_ops)
[    0.066247] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[    0.066252] [drm] No driver support for vblank timestamp query.
[    0.066276] hdlcd 2b000000.hdlcd: Cannot do DMA to address 0x0000000000000000
[    0.066281] swiotlb: coherent allocation failed for device 2b000000.hdlcd size=8294400
[    0.066288] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.15.0 #1
[    0.066293] Hardware name: V2P-AARCH64 (DT)
[    0.066296] Call trace:
[    0.066301]  dump_backtrace+0x0/0x1b0
[    0.066306]  show_stack+0x24/0x30
[    0.066311]  dump_stack+0xb8/0xf0
[    0.066316]  swiotlb_alloc_coherent+0x17c/0x190
[    0.066321]  __dma_alloc+0x68/0x160
[    0.066325]  drm_gem_cma_create+0x98/0x120
[    0.066330]  drm_fbdev_cma_create+0x74/0x2e0
[    0.066335]  __drm_fb_helper_initial_config_and_unlock+0x1d8/0x3a0
[    0.066341]  drm_fb_helper_initial_config+0x4c/0x58
[    0.066347]  drm_fbdev_cma_init_with_funcs+0x98/0x148
[    0.066352]  drm_fbdev_cma_init+0x40/0x50
[    0.066357]  hdlcd_drm_bind+0x220/0x428
[    0.066362]  try_to_bring_up_master+0x21c/0x2b8
[    0.066367]  component_master_add_with_match+0xa8/0xf0
[    0.066372]  hdlcd_probe+0x60/0x78
[    0.066377]  platform_drv_probe+0x60/0xc8
[    0.066382]  driver_probe_device+0x30c/0x478
[    0.066388]  __driver_attach+0x10c/0x128
[    0.066393]  bus_for_each_dev+0x70/0xb0
[    0.066398]  driver_attach+0x30/0x40
[    0.066402]  bus_add_driver+0x1d0/0x298
[    0.066408]  driver_register+0x68/0x100
[    0.066413]  __platform_driver_register+0x54/0x60
[    0.066418]  hdlcd_platform_driver_init+0x20/0x28
[    0.066424]  do_one_initcall+0x44/0x130
[    0.066428]  kernel_init_freeable+0x13c/0x1d8
[    0.066433]  kernel_init+0x18/0x108
[    0.066438]  ret_from_fork+0x10/0x1c
[    0.066444] hdlcd 2b000000.hdlcd: Failed to set initial hw configuration.
[    0.066470] hdlcd 2b000000.hdlcd: master bind failed: -12
[    0.066477] hdlcd: probe of 2b000000.hdlcd failed with error -12
So what other options are missing from gem5_defconfig? It would be cool to minimize it out to better understand the options.
When debugging a module, it becomes tedious to wait for build and re-type:
/modulename.sh
every time.
To automate that, use the methods described at: init
We use printk a lot, and it shows on the terminal by default, along with stdout and what you type.
Hide all printk messages:
dmesg -n 1
or equivalently:
echo 1 > /proc/sys/kernel/printk
See also: https://superuser.com/questions/351387/how-to-stop-kernel-messages-from-flooding-my-console
Do it with Kernel command line parameters to affect the boot itself:
./run -e 'loglevel=5'
and now only boot warning messages or worse show, which is useful to identify problems.
Our default printk format is:
<LEVEL>[TIMESTAMP] MESSAGE
e.g.:
<6>[ 2.979121] Freeing unused kernel memory: 2024K
where:
-
LEVEL: higher means less serious
-
TIMESTAMP: seconds since boot
This format is selected by the following boot options:
-
console_msg_format=syslog: adds the <LEVEL> part. Added in v4.16.
-
printk.time=y: adds the [TIMESTAMP] part
Scroll up in Graphic mode:
Shift-PgUp
but I never managed to increase that buffer.
The superior alternative is to use text mode and GNU screen or tmux.
Debug messages are not printable by default without recompiling.
But the awesome CONFIG_DYNAMIC_DEBUG=y option which we enable by default allows us to do:
echo 8 > /proc/sys/kernel/printk
echo 'file kernel/module.c +p' > /sys/kernel/debug/dynamic_debug/control
/myinsmod.out /hello.ko
and we have a shortcut at:
/pr_debug.sh
Source: rootfs_overlay/pr_debug.sh.
Wildcards are also accepted, e.g. enable all messages from all files:
echo 'file * +p' > /sys/kernel/debug/dynamic_debug/control
TODO: why is this not working:
echo 'func sys_init_module +p' > /sys/kernel/debug/dynamic_debug/control
Enable messages in specific modules:
echo 8 > /proc/sys/kernel/printk
echo 'module myprintk +p' > /sys/kernel/debug/dynamic_debug/control
insmod /myprintk.ko
Source: kernel_module/myprintk.c
This outputs the pr_debug message:
printk debug
but TODO: it also shows debug messages even without enabling them explicitly:
echo 8 > /proc/sys/kernel/printk
insmod /myprintk.ko
and it shows as enabled:
# grep myprintk /sys/kernel/debug/dynamic_debug/control
/linux-kernel-module-cheat/out/x86_64/buildroot/build/kernel_module-1.0/./myprintk.c:12 [myprintk]myinit =p "pr_debug\012"
Enable pr_debug for boot messages as well, before we can reach userland and write to /proc:
./run -e 'dyndbg="file * +p" loglevel=8'
Get ready for the noisiest boot ever, I think it overflows the printk buffer and funny things happen.
When CONFIG_DYNAMIC_DEBUG is set, printk(KERN_DEBUG ...) is not exactly the same as pr_debug(...), since printk(KERN_DEBUG ...) messages are visible with:
./run -e 'initcall_debug loglevel=8'
which outputs lines of type:
<7>[    1.756680] calling  clk_disable_unused+0x0/0x130 @ 1
<7>[    1.757003] initcall clk_disable_unused+0x0/0x130 returned 0 after 111 usecs
which are printk(KERN_DEBUG inside init/main.c in v4.16.
Mentioned at: https://stackoverflow.com/questions/37272109/how-to-get-details-of-all-modules-drivers-got-initialized-probed-during-kernel-b
This likely comes from the ifdef split at include/linux/printk.h:
/* If you are writing a driver, please use dev_dbg instead */
#if defined(CONFIG_DYNAMIC_DEBUG)
#include <linux/dynamic_debug.h>
/* dynamic_pr_debug() uses pr_fmt() internally so we don't need it here */
#define pr_debug(fmt, ...) \
dynamic_pr_debug(fmt, ##__VA_ARGS__)
#elif defined(DEBUG)
#define pr_debug(fmt, ...) \
printk(KERN_DEBUG pr_fmt(fmt), ##__VA_ARGS__)
#else
#define pr_debug(fmt, ...) \
no_printk(KERN_DEBUG pr_fmt(fmt), ##__VA_ARGS__)
#endif
./run -e 'ignore_loglevel'
enables all log levels, and is basically the same as:
./run -e 'loglevel=8'
except that you don’t need to know what is the maximum level.
After making changes to a package, you must explicitly request it to be rebuilt.
For example, if you modify the kernel modules, you must rebuild with:
./build -k
which is just an alias for:
./build -- kernel_module-reconfigure
where kernel_module is the name of our Buildroot package that contains the kernel modules.
Other important targets are:
./build -l -q -g
which rebuild the Linux kernel, QEMU, and gem5 respectively. They are essentially aliases for:
./build -- linux-reconfigure host-qemu-reconfigure gem5-reconfigure
However, some of our aliases such as -l also have some magic convenience properties. So generally just use the aliases instead.
We don’t rebuild by default because, even with make incremental rebuilds, the timestamp check takes a few annoying seconds.
Not all packages have an alias, when they don’t, just use the form:
./build -- <pkg>-reconfigure
For example, if you decide to Enable compiler optimizations after an initial build is finished, you must first clean the build before rebuilding:
./build -B 'BR2_OPTIMIZE_3=y' kernel_module-dirclean kernel_module-reconfigure
as explained at: https://buildroot.org/downloads/manual/manual.html#rebuild-pkg
The clean is necessary because the source files didn’t change, so make would just check the timestamps and not build anything.
It gets annoying to retype -a aarch64 for every single command, or to remember ./build -B setups.
To simplify that, do:
cp cli.example data/cli
and then edit the data/cli file to your needs.
That file is used to pass extra command line arguments to most of our utilities.
Of course, you could get by with the shell history, or your own aliases, but we’ve felt that it was worth introducing a common mechanism for that.
You did something crazy, and nothing seems to work anymore?
All builds are stored under buildroot/.
The most coarse thing you can do is:
cd buildroot
git checkout -- .
git clean -xdf .
To only nuke one architecture, do:
rm -rf out/x86_64/buildroot
Only nuke one package:
rm -rf out/x86_64/buildroot/build/host-qemu-custom
./build
This is sometimes necessary when changing the version of the submodules, and then builds fail. We should try to understand why and report bugs.
We disable filesystem persistency for both QEMU and gem5 by default, to prevent the emulator from putting the image in an unknown state.
For QEMU, this is done by passing the snapshot option to -drive, and for gem5 it is the default behaviour.
If you hack up our run script to remove that option, then:
./run -F 'date >f;poweroff'
followed by:
./run -F 'cat f'
gives the date, because poweroff without -n syncs before shutdown.
The sync command also saves the disk:
sync
When you do:
./build
the disk image gets overwritten by a fresh filesystem and you lose all changes.
Remember that if you forcibly turn QEMU off without sync or poweroff from inside the VM, e.g. by closing the QEMU window, disk changes may not be saved.
Persistency is also turned off when booting from initrd with a CPIO instead of with a disk.
Disk persistency is useful to re-run shell commands from the history of a previous session with Ctrl-R, but we felt that the loss of determinism was not worth it.
TODO how to make gem5 disk writes persistent?
As of cadb92f2df916dbb47f428fd1ec4932a2e1f0f48 there are some read_only entries in the config.ini under cow sections, but hacking them to true did not work:
diff --git a/configs/common/FSConfig.py b/configs/common/FSConfig.py
index 17498c42b..76b8b351d 100644
--- a/configs/common/FSConfig.py
+++ b/configs/common/FSConfig.py
@@ -60,7 +60,7 @@ os_types = { 'alpha' : [ 'linux' ],
}
class CowIdeDisk(IdeDisk):
- image = CowDiskImage(child=RawDiskImage(read_only=True),
+ image = CowDiskImage(child=RawDiskImage(read_only=False),
read_only=False)
def childImage(self, ci):
The directory of interest is src/dev/storage.
qcow2 does not appear to be supported: there are no hits in the source tree, and there is a mention on Nate’s 2009 wishlist: http://gem5.org/Nate%27s_Wish_List
Bootloaders can pass a string as input to the Linux kernel when it is booting to control its behaviour, much like the execve system call does to userland processes.
This allows us to control the behaviour of the kernel without rebuilding anything.
With QEMU, QEMU itself acts as the bootloader and provides the -append option, which we expose through ./run -e, e.g.:
./run -e 'foo bar'
Then inside the guest, you can check which options were given with:
cat /proc/cmdline
They are also printed at the beginning of the boot message:
dmesg | grep "Command line"
See also:
The arguments are documented in the kernel documentation: https://www.kernel.org/doc/html/v4.14/admin-guide/kernel-parameters.html
When dealing with real boards, extra command line options are provided on some magic bootloader configuration file, e.g.:
-
GRUB configuration files: https://askubuntu.com/questions/19486/how-do-i-add-a-kernel-boot-parameter
-
Raspberry pi
/boot/cmdline.txton a magic partition: https://raspberrypi.stackexchange.com/questions/14839/how-to-change-the-kernel-commandline-for-archlinuxarm-on-raspberry-pi-effectly
Double quotes can be used to escape spaces as in opt="a b", but double quotes themselves cannot be escaped, e.g. opt="a\"b" does not work.
This even led us to use base64 encoding with -E!
There are two methods:
-
__setup, as in: __setup("console=", console_setup);
-
core_param, as in: core_param(panic, panic_timeout, int, 0644);
core_param suggests how they are different:
/**
 * core_param - define a historical core kernel parameter.
...
 * core_param is just like module_param(), but cannot be modular and
 * doesn't add a prefix (such as "printk."). This is for compatibility
 * with __setup(), and it makes sense as truly core parameters aren't
 * tied to the particular file they're in.
 */
Disable userland address space randomization. Test it out by running rand_check.out twice:
./run -F '/rand_check.out;/poweroff.out'
./run -F '/rand_check.out;/poweroff.out'
If we remove it from our run script by hacking it up, the addresses shown by rand_check.out vary across boots.
Equivalent to:
echo 0 > /proc/sys/kernel/randomize_va_space
If you are feeling fancy, you can also insert modules with:
modprobe hello
which insmods kernel_module/hello.c.
modprobe searches for modules under:
ls /lib/modules/*/extra/
Kernel modules built from the Linux mainline tree with CONFIG_SOME_MOD=m, are automatically available with modprobe, e.g.:
modprobe dummy-irq irq=1
If you are feeling raw, you can insert and remove modules with our own minimal module inserter and remover!
# init_module
/myinsmod.out /hello.ko
# finit_module
/myinsmod.out /hello.ko "" 1
/myrmmod.out hello
which teaches you how it is done from C code.
Source:
The Linux kernel offers two system calls for module insertion:
-
init_module
-
finit_module
and:
man init_module
documents that:
The finit_module() system call is like init_module(), but reads the module to be loaded from the file descriptor fd. It is useful when the authenticity of a kernel module can be determined from its location in the filesystem; in cases where that is possible, the overhead of using cryptographically signed modules to determine the authenticity of a module can be avoided. The param_values argument is as for init_module().
finit is newer and was added only in v3.8. More rationale: https://lwn.net/Articles/519010/
When doing long simulations sweeping across multiple system parameters, it becomes fundamental to do multiple simulations in parallel.
This is especially true for gem5, which runs much slower than QEMU and cannot use multiple host cores to speed up a single simulation: cirosantilli2/gem5-issues#15
This also has a good synergy with Build variants.
First shell:
./run
Another shell:
./run -n 1
The default run id is 0.
This method also allows us to keep run outputs in separate directories for later inspection, e.g.:
./run -aA -g -n 0 &>/dev/null &
./run -aA -g -n 1 &>/dev/null &
produces two separate m5out directories:
less out/aarch64/gem5/default/0/m5out
less out/aarch64/gem5/default/1/m5out
and the gem5 host executable stdout and stderr can be found at:
less out/aarch64/gem5/default/0/termout.txt
less out/aarch64/gem5/default/1/termout.txt
Each line of those files is prefixed with the timestamp, in seconds since the start of the program, at which it appeared.
You can also add a prefix to the build ID before a period:
./run -aA -g -n some-experiment.1
which then uses the output directory:
less out/aarch64/gem5/default/some-experiment.1/m5out
and makes it easier to remember afterwards which directory contains what.
However this still takes up the same ports as:
./run -aA -g -n 1
so you cannot run both at the same time.
Like CPU architecture, you will need to pass the -n option to anything that needs to know runtime information, e.g. GDB step debug:
./run -n 1
./rungdb -n 1
To run multiple gem5 checkouts, see: gem5 simultaneous runs with build variants.
Implementation note: we create multiple namespaces for two things:
-
run output directory
-
ports
-
QEMU allows setting all ports explicitly.
If a port is not free, it just crashes.
We assign a contiguous port range for each run ID.
-
gem5 automatically increments ports until it finds a free one.
gem5 60600f09c25255b3c8f72da7fb49100e2682093a does not seem to expose a way to set the terminal and VNC ports from fs.py, so we just let gem5 assign the ports itself, and use -n only to match what it assigned. Those ports both appear on config.ini. The GDB port can be assigned with gem5.opt --remote-gdb-port, but it does not appear on config.ini.
-
-d makes QEMU wait for a GDB connection, otherwise we could accidentally go past the point we want to break at:
./run -d
Say you want to break at start_kernel. So on another shell:
./rungdb start_kernel
or at a given line:
./rungdb init/main.c:1088
Now QEMU will stop there, and you can use the normal GDB commands:
l
n
c
See also:
O=0 is an impossible dream, O=2 being the default.
So get ready for some weird jumps, and <value optimized out> fun. Why, Linux, why.
Let’s observe the kernel as it reacts to some userland actions.
Start QEMU with just:
./run
and after boot inside a shell run:
/count.sh
which counts to infinity to stdout. Source: rootfs_overlay/count.sh.
Then in another shell, run:
./rungdb
and then hit:
Ctrl-C
break __x64_sys_write
continue
continue
continue
And you now control the counting on the first shell from GDB!
Before v4.17, the symbol name was just sys_write, the change happened at d5a00528b58cdb2c71206e18bd021e34c4eab878. aarch64 still uses just sys_write.
When you hit Ctrl-C, if we happen to be inside kernel code at that point (very likely if there are no heavy background tasks waiting and we are just idling on a sleep-type system call of the command prompt), we can already see the source for the random place inside the kernel where we stopped.
tmux just makes things even more fun by allowing us to see both terminals at once without dragging windows around!
First start tmux with:
tmux
Now that you are inside a shell inside tmux, run:
./run -du
This splits the terminal into two panes:
-
left: usual QEMU
-
right: gdb
and focuses on the GDB pane.
Now you can navigate with the usual tmux shortcuts:
-
switch between the two panes with:
Ctrl-B O
-
close either pane by killing its terminal with Ctrl-D as usual
To start again, switch back to the QEMU pane, kill the emulator, and re-run:
./run -du
This automatically clears the GDB pane, and starts a new one.
Pass extra GDB arguments with:
./run -du -U start_kernel
See the tmux manual for further details:
man tmux
If you are using gem5 instead of QEMU, -u has a different effect: it opens the gem5 terminal instead of the debugger:
./run -gu
If you also want to use the debugger with gem5, you will need to create new terminals as usual.
From inside tmux, you can do that with Ctrl-B C or Ctrl-B %.
To see the debugger by default instead of the terminal, run:
./tmu ./rungdb;./run -dg
Loadable kernel modules are a bit trickier since the kernel can place them at different memory locations depending on load order.
So we cannot set the breakpoints before insmod.
However, the Linux kernel GDB scripts offer the lx-symbols command, which takes care of that beautifully for us.
Shell 1:
./run
Wait for the boot to end and run:
insmod /timer.ko
Source: kernel_module/timer.c.
This prints a message to dmesg every second.
Shell 2:
./rungdb
In GDB, hit Ctrl-C, and note how it says:
scanning for modules in /linux-kernel-module-cheat//out/x86_64/buildroot/build/linux-custom
loading @0xffffffffc0000000: ../kernel_module-1.0//timer.ko
That’s lx-symbols working! Now simply:
b lkmc_timer_callback
c
c
c
and we now control the callback from GDB!
Just don’t forget to remove your breakpoints after rmmod, or they will point to stale memory locations.
TODO: why does break work_func for insmod kthread.ko not break the first time I insmod, but breaks the second time?
TODO on arm 51e31cdc2933a774c2a0dc62664ad8acec1d2dbe it does not always work, and lx-symbols fails with the message:
loading vmlinux
Traceback (most recent call last):
File "/linux-kernel-module-cheat//out/arm/buildroot/build/linux-custom/scripts/gdb/linux/symbols.py", line 163, in invoke
self.load_all_symbols()
File "/linux-kernel-module-cheat//out/arm/buildroot/build/linux-custom/scripts/gdb/linux/symbols.py", line 150, in load_all_symbols
[self.load_module_symbols(module) for module in module_list]
File "/linux-kernel-module-cheat//out/arm/buildroot/build/linux-custom/scripts/gdb/linux/symbols.py", line 110, in load_module_symbols
module_name = module['name'].string()
gdb.MemoryError: Cannot access memory at address 0xbf0000cc
Error occurred in Python command: Cannot access memory at address 0xbf0000cc
Can’t reproduce on x86_64 or aarch64: those are fine.
It is kind of random: if you just insmod manually and then immediately ./rungdb -a arm, then it usually works.
But this fails most of the time: shell 1:
./run -a arm -F 'insmod /hello.ko'
shell 2:
./rungdb -a arm
then hit Ctrl-C on shell 2, and voila.
Then:
cat /proc/modules
says that the load address is:
0xbf000000
so it is close to the failing 0xbf0000cc.
readelf:
./runtc readelf -s ./out/x86_64/buildroot/build/kernel_module-1.0/hello.ko
does not give any interesting hits near offset 0xcc: no symbol was placed that far.
TODO find a more convenient method. We have working methods, but they are not ideal.
This is not very easy, since by the time the module finishes loading, and lx-symbols can work properly, module_init has already finished running!
Possibly asked at:
The kernel calls module_init synchronously, therefore it is not hard to step into that call.
As of 4.16, the call happens in do_one_initcall, so we can do in shell 1:
./run
shell 2 after boot finishes (because there are other calls to do_one_initcall at boot, presumably for the built-in modules):
./rungdb do_one_initcall
then step until the line:
833 ret = fn();
which does the actual call, and then step into it.
For the next time, you can also put a breakpoint there directly:
./rungdb init/main.c:833
How we found this out: first we got GDB module_init calculate entry address working, and then we did a bt. AKA cheating :-)
This works, but is a bit annoying.
The key observation is that the load address of kernel modules is deterministic: there is a pre-allocated memory region, the "module mapping space" (https://www.kernel.org/doc/Documentation/x86/x86_64/mm.txt), which is filled from the bottom up.
So once we find the address the first time, we can just reuse it afterwards, as long as we don’t modify the module.
Do a fresh boot and get the module:
./run -F '/pr_debug.sh;insmod /fops.ko;/poweroff.out'
The boot must be fresh, because the load address changes every time we insert, even after removing previous modules.
The base address shows on terminal:
0xffffffffc0000000 .text
Now let’s find the offset of myinit:
./runtc readelf \
  -s ./out/x86_64/buildroot/build/kernel_module-1.0/fops.ko | \
  grep myinit
which gives:
30: 0000000000000240 43 FUNC LOCAL DEFAULT 2 myinit
so the offset address is 0x240 and we deduce that the function will be placed at:
0xffffffffc0000000 + 0x240 = 0xffffffffc0000240
Now we can just do a fresh boot on shell 1:
./run -E 'insmod /fops.ko;/poweroff.out' -d
and on shell 2:
./rungdb '*0xffffffffc0000240'
GDB then breaks, and lx-symbols works.
TODO not working. This could be potentially very convenient.
The idea here is to break at a point late enough inside sys_init_module, at which point lx-symbols can be called and do its magic.
Beware that there are both sys_init_module and sys_finit_module syscalls, and insmod uses finit_module by default.
Both call do_init_module however, which is what lx-symbols hooks to.
If we try:
b sys_finit_module
then hitting:
n
does not break, and insertion happens, likely because of optimizations? See: Disable kernel compiler optimizations.
Then we try:
b do_init_module
A naive:
fin
also fails to break!
Finally, in despair we notice that pr_debug prints the kernel load address as explained at Bypass lx-symbols.
So, if we set a breakpoint just after that message is printed by searching where that happens on the Linux source code, we must be able to get the correct load address before init_module happens.
This is another possibility: we could modify the module source by adding a trap instruction of some kind.
This appears to be described at: https://www.linuxjournal.com/article/4525
But it refers to a gdbstart script which is not in the tree anymore and beyond my git log capabilities.
And just adding:
asm( " int $3");
directly gives an oops as I’d expect.
Useless, but a good way to show how hardcore you are. Disable lx-symbols with:
./rungdb -L
From inside guest:
insmod /fops.ko
cat /proc/modules
as mentioned at:
This will give a line of form:
fops 2327 0 - Live 0xfffffffa00000000
And then tell GDB where the module was loaded with:
Ctrl-C
add-symbol-file ../kernel_module-1.0/fops.ko 0xfffffffa00000000
Alternatively, if the module panics before you can read /proc/modules, there is a pr_debug which shows the load address:
echo 8 > /proc/sys/kernel/printk
echo 'file kernel/module.c +p' > /sys/kernel/debug/dynamic_debug/control
/myinsmod.out /hello.ko
And then search for a line of type:
[ 84.877482] 0xfffffffa00000000 .text
TODO: why can’t we break at early startup stuff such as:
./rungdb extract_kernel
./rungdb main
Maybe it is because they are being copied around at specific locations instead of being run directly from inside the main image, which is where the debug information points to?
gem5 tracing with --debug-flags=Exec does show the right symbols however! So in the worst case, we can just read their source. Amazing.
One possibility is to run:
./trace-boot -a arm
and then find the second address (the first one does not work, already too late maybe):
less ./out/arm/qemu/0/trace.txt
and break there:
./run -a arm -d
./rungdb -a arm '*0x1000'
but TODO: it does not show the source assembly under arch/arm: https://stackoverflow.com/questions/11423784/qemu-arm-linux-kernel-boot-debug-no-source-code
I also tried to hack rungdb with:
@@ -81,7 +81,7 @@ else
   ${gdb} \
     -q \\
     -ex 'add-auto-load-safe-path $(pwd)' \\
-    -ex 'file vmlinux' \\
+    -ex 'file arch/arm/boot/compressed/vmlinux' \\
     -ex 'target remote localhost:${port}' \\
     ${brk} \
     -ex 'continue' \\
and now I do have the symbols from arch/arm/boot/compressed/vmlinux, but the breaks still don’t work.
QEMU’s -gdb GDB breakpoints are set on virtual addresses, so you can in theory debug userland processes as well.
You will generally want to use gdbserver for this as it is more reliable, but this method can overcome the following limitations of gdbserver:
-
the emulator does not support host to guest networking. This seems to be the case for gem5: gem5 host to guest networking
-
cannot see the start of the init process easily
-
gdbserver alters the working of the kernel, and makes your run less representative
Known limitations of direct userland debugging:
-
the kernel might switch context to another process or to the kernel itself e.g. on a system call, and then TODO confirm: the PC would go to weird places and source code would be missing.
-
TODO step into shared libraries. If I attempt to load them explicitly:
(gdb) sharedlibrary ../../staging/lib/libc.so.0
No loaded shared libraries match the pattern `../../staging/lib/libc.so.0'.
since GDB does not know that libc is loaded.
-
Shell 1:
./run -d -e 'init=/sleep_forever.out'
-
Shell 2:
./rungdb-user kernel_module-1.0/user/sleep_forever.out main
BusyBox custom init process:
-
Shell 1:
./run -d -e 'init=/bin/ls'
-
Shell 2:
./rungdb-user busybox-1.26.2/busybox ls_main
This follows BusyBox' convention of calling the main for each executable as <exec>_main since the busybox executable has many "mains".
BusyBox default init process:
-
Shell 1:
./run -d
-
Shell 2:
./rungdb-user busybox-1.26.2/busybox init_main
This cannot be debugged in any other way without modifying the source, since /sbin/init exits early with:
"must be run as PID 1"
Non-init process:
-
Shell 1:
./run -d
-
Shell 2:
./rungdb-user kernel_module-1.0/user/myinsmod.out main
-
Shell 1 after the boot finishes:
/myinsmod.out /hello.ko
This is the least reliable setup as there might be other processes that use the given virtual address.
TODO: on QEMU bfba11afddae2f7b2c1335b4e23133e9cd3c9126, it works on x86_64 and aarch64 but fails on arm as follows:
-
Shell 1:
./run -a arm
-
Shell 2: wait for boot to finish, and run:
./rungdb-user -a arm kernel_module-1.0/user/hello.out main
-
Shell 1:
/hello.out
The problem is that the b main that we do inside ./rungdb-user says:
Cannot access memory at address 0x10604
We have also double checked the address with:
./runtc -a arm readelf -s \
  ./out/arm/buildroot/build/kernel_module-1.0/user/hello.out | \
  grep main
and from GDB:
info line main
and both give:
000105fc
which is just 8 bytes before 0x10604.
gdbserver also says 0x10604.
However, if we do a Ctrl-C in GDB, and then a direct:
b *0x000105fc
it works. Why?!
On GEM5, x86 can also give the Cannot access memory at address, so maybe it is also unreliable on QEMU, and works just by coincidence.
GDB can call functions as explained at: https://stackoverflow.com/questions/1354731/how-to-evaluate-functions-in-gdb
However this is failing for us:
-
some symbols are not visible to
calleven thoughbsees them -
for those that are,
callfails with an E14 error
E.g.: if we break on __x64_sys_write on /count.sh:
>>> call printk(0, "asdf")
Could not fetch register "orig_rax"; remote failure reply 'E14'
>>> b printk
Breakpoint 2 at 0xffffffff81091bca: file kernel/printk/printk.c, line 1824.
>>> call fdget_pos(fd)
No symbol "fdget_pos" in current context.
>>> b fdget_pos
Breakpoint 3 at 0xffffffff811615e3: fdget_pos. (9 locations)
>>>
even though fdget_pos is the first thing __x64_sys_write does:
581 SYSCALL_DEFINE3(write, unsigned int, fd, const char __user *, buf,
582 size_t, count)
583 {
584 struct fd f = fdget_pos(fd);
I also noticed that I get the same error:
Could not fetch register "orig_rax"; remote failure reply 'E14'
when trying to use:
fin
on many (all?) functions.
See also: cirosantilli#19
We can set and get which cores the Linux kernel allows a program to run on with sched_getaffinity and sched_setaffinity:
./run -c2 -F '/sched_getaffinity.out'
Sample output:
sched_getaffinity = 1 1
sched_getcpu = 1
sched_getaffinity = 1 0
sched_getcpu = 0
Which shows us that:
-
initially:
-
all 2 cores were enabled as shown by sched_getaffinity = 1 1
-
the process was randomly assigned to run on core 1 (the second one) as shown by sched_getcpu = 1. If we run this several times, it will also run on core 0 sometimes.
-
-
then we restrict the affinity to just core 0, and we see that the program was actually moved to core 0
The number of cores is modified as explained at: Number of cores
taskset from the util-linux package sets the initial core affinity of a program:
taskset -c 1,1 /sched_getaffinity.out
output:
sched_getaffinity = 0 1
sched_getcpu = 1
sched_getaffinity = 1 0
sched_getcpu = 0
so we see that the affinity was restricted to the second core from the start.
Let’s do a QEMU observation to justify this example being in the repository with userland breakpoints.
We will run our /sched_getaffinity.out infinitely many times, on core 0 and core 1 alternately:
./run -c2 -d -F 'i=0; while true; do taskset -c $i,$i /sched_getaffinity.out; i=$((! $i)); done'
on another shell:
./rungdb-user kernel_module-1.0/user/sched_getaffinity.out main
Then, inside GDB:
(gdb) info threads
  Id   Target Id         Frame
* 1    Thread 1 (CPU#0 [running]) main () at sched_getaffinity.c:30
  2    Thread 2 (CPU#1 [halted ]) native_safe_halt () at ./arch/x86/include/asm/irqflags.h:55
(gdb) c
(gdb) info threads
  Id   Target Id         Frame
  1    Thread 1 (CPU#0 [halted ]) native_safe_halt () at ./arch/x86/include/asm/irqflags.h:55
* 2    Thread 2 (CPU#1 [running]) main () at sched_getaffinity.c:30
(gdb) c
and we observe that info threads shows the actual correct core on which the process was restricted to run by taskset!
We should also try it out with kernel modules: https://stackoverflow.com/questions/28347876/set-cpu-affinity-on-a-loadable-linux-kernel-module
TODO we then tried:
./run -c2 -F '/sched_getaffinity_threads.out'
and:
./rungdb-user kernel_module-1.0/user/sched_getaffinity_threads.out
in order to switch between two simultaneous live threads with different affinities, but it just didn’t break on our threads:
b main_thread_0
Bibliography:
We source the Linux kernel GDB scripts by default for lx-symbols, but they also contain some other goodies worth looking into.
Those scripts basically parse some in-kernel data structures to offer greater visibility with GDB.
All defined commands are prefixed by lx-, so to get a full list just try to tab complete that.
There aren’t as many as I’d like, and the ones that do exist are pretty self explanatory, but let’s give a few examples.
Show dmesg:
lx-dmesg
Show the Kernel command line parameters:
lx-cmdline
Dump the device tree to a fdtdump.dtb file in the current directory:
lx-fdtdump
pwd
List inserted kernel modules:
lx-lsmod
Sample output:
Address            Module  Size  Used by
0xffffff80006d0000 hello   16384 0
Bibliography:
List all processes:
lx-ps
Sample output:
0xffff88000ed08000 1 init
0xffff88000ed08ac0 2 kthreadd
The second and third fields are obviously PID and process name.
The first one is more interesting, and contains the address of the task_struct in memory.
This can be confirmed with:
p *(struct task_struct *)0xffff88000ed08000
which contains the correct PID for all threads I’ve tried:
pid = 1,
TODO get the PC of the kthreads: https://stackoverflow.com/questions/26030910/find-program-counter-of-process-in-kernel Then we would be able to see where the threads are stopped in the code!
On ARM, I tried:
task_pt_regs((struct thread_info *)((struct task_struct)*0xffffffc00e8f8000))->uregs[ARM_pc]
but task_pt_regs is a #define and GDB cannot see defines without -ggdb3: https://stackoverflow.com/questions/2934006/how-do-i-print-a-defined-constant-in-gdb which are apparently not set?
Bibliography:
TODO: only working with Graphic mode. Without it, nothing shows on the terminal. So likely something linked to the option console=ttyS0.
KGDB is kernel dark magic that allows you to GDB the kernel on real hardware without any extra hardware support.
It is useless with QEMU since we already have full system visibility with -gdb, but this is a good way to learn it.
Cheaper than JTAG (free) and easier to set up (all you need is serial), but with less visibility, as it depends on the kernel working, so e.g. it dies on panic and does not see the boot sequence.
Usage:
./run -k
./rungdb -k
In GDB:
c
In QEMU:
/count.sh &
/kgdb.sh
In GDB:
b __x64_sys_write
c
c
c
c
And now you can count from GDB!
If you do: b __x64_sys_write immediately after ./rungdb -k, it fails with KGDB: BP remove failed: <address>. I think this is because it would break too early on the boot sequence, and KGDB is not yet ready.
See also:
GDB not connecting to KGDB in ARM. Possibly linked to -serial stdio. See also: https://stackoverflow.com/questions/14155577/how-to-use-kgdb-on-arm
The main shell just hangs at:
Entering kdb (current=0xf8ce07d3, pid 1) due to Keyboard Entry
kdb>
and GDB shell gives:
Reading symbols from vmlinux...done.
Remote debugging using localhost:1234
Ignoring packet error, continuing...
warning: unrecognized item "timeout" in "qSupported" response
Ignoring packet error, continuing...
Remote replied unexpectedly to 'vMustReplyEmpty': timeout
In QEMU:
/kgdb-mod.sh
Source: rootfs_overlay/kgdb-mod.sh.
In GDB:
lx-symbols ../kernel_module-1.0/
b fop_write
c
c
c
and you now control the count.
TODO: if I pass -ex lx-symbols to the gdb command, just like done for QEMU -gdb, the kernel oopses. How to automate this step?
If you modify runqemu to use:
-append kgdboc=kbd
instead of kgdboc=ttyS0,115200, you enter a different debugging mode called KDB.
Usage: in QEMU:
[0]kdb> go
Boot finishes, then:
/kgdb.sh
Source: rootfs_overlay/kgdb.sh.
And you are back in KDB. Now you can:
[0]kdb> help
[0]kdb> bp __x64_sys_write
[0]kdb> go
And you will break whenever __x64_sys_write is hit.
The other KDB commands allow you to single step instructions, and to view memory, registers and some higher level kernel runtime data.
But TODO I don’t think you can see where you are in the kernel source code and line step as from GDB, since the kernel source is not available on guest (ah, if only debugging information supported full source).
Step debug userland processes to understand how they are talking to the kernel.
Guest:
/gdbserver.sh /myinsmod.out /hello.ko
Source: rootfs_overlay/gdbserver.sh.
Host:
./rungdbserver kernel_module-1.0/user/myinsmod.out
You can find the executable with:
find out/x86_64/buildroot/build -name myinsmod.out
TODO: automate the path finding:
-
using the executable from under out/x86_64/buildroot/target would be easier as the path is the same as in the guest, but unfortunately those executables are stripped to make the guest smaller. BR2_STRIP_none=y should disable stripping, but makes the image way larger.
-
out/x86_64/buildroot/staging/ would be even better than target/ as the docs say that this directory contains binaries before they were stripped. However, only a few binaries are pre-installed there by default, and it seems to be a manual per-package thing.
E.g. pciutils has for lspci:
define PCIUTILS_INSTALL_STAGING_CMDS
	$(TARGET_MAKE_ENV) $(MAKE1) -C $(@D) $(PCIUTILS_MAKE_OPTS) \
		PREFIX=$(STAGING_DIR)/usr SBINDIR=$(STAGING_DIR)/usr/bin \
		install install-lib install-pcilib
endef
and the docs describe the *_INSTALL_STAGING per-package config, which is normally set for shared library packages.
Feature request: https://bugs.busybox.net/show_bug.cgi?id=10386
An implementation overview can be found at: https://reverseengineering.stackexchange.com/questions/8829/cross-debugging-for-mips-elf-with-qemu-toolchain/16214#16214
As usual, different archs work with:
./rungdbserver -a arm kernel_module-1.0/user/myinsmod.out
BusyBox executables are all symlinks, so if you do on guest:
/gdbserver.sh ls
on host you need:
./rungdbserver busybox-1.26.2/busybox
Our setup gives you the rare opportunity to step debug libc and other system libraries e.g. with:
b open
c
Or simply by stepping into calls:
s
This is made possible by the GDB command:
set sysroot ${buildroot_out_dir}/staging
which automatically finds unstripped shared libraries on the host for us.
TODO: try to step debug the dynamic loader. Would be even easier if starti is available: https://stackoverflow.com/questions/10483544/stopping-at-the-first-machine-code-instruction-in-gdb
The portability of the kernel and toolchains is amazing: change an option and most things magically work on completely different hardware.
To use arm instead of x86 for example:
./build -a arm
./run -a arm
Debug:
./run -a arm -d
# On another terminal.
./rungdb -a arm
We also have one letter shorthand names for the architectures:
# aarch64
./run -a A
# arm
./run -a a
# mips64
./run -a m
# x86_64
./run -a x
Known quirks of the supported architectures are documented in this section.
This example illustrates how reading from the x86 control registers with mov crX, rax can only be done from kernel land on ring0.
From kernel land:
insmod ring0.ko
works and outputs the registers, for example:
cr0 = 0xFFFF880080050033
cr2 = 0xFFFFFFFF006A0008
cr3 = 0xFFFFF0DCDC000
However if we try to do it from userland:
/ring0.out
stdout gives:
Segmentation fault
and dmesg outputs:
traps: ring0.out[55] general protection ip:40054c sp:7fffffffec20 error:0 in ring0.out[400000+1000]
Sources:
In both cases, we attempt to run the exact same code which is shared on the ring0.h header file.
Bibliography:
TODO Can you run arm executables in the aarch64 guest? https://stackoverflow.com/questions/22460589/armv8-running-legacy-32-bit-applications-on-64-bit-os/51466709#51466709
I’ve tried:
./out/aarch64/buildroot/host/bin/aarch64-linux-gcc -static ~/test/hello_world.c -o data/9p/a.out
./run -aA -F '/mnt/9p/a.out'
but it fails with:
a.out: line 1: syntax error: unexpected word (expecting ")")
Keep in mind that MIPS has the worst support compared to our other architectures due to the smaller community. Patches welcome as usual.
TODOs:
-
networking is not working. See also:
-
GDB step debug does not work properly: it does not find start_kernel
Haven’t tried it, doubt it will work out of the box! :-)
Haven’t tried.
When the Linux kernel finishes booting, it runs an executable as the first and only userland process.
This init process is then responsible for setting up the entire userland (or destroying everything when you want to have fun).
This typically means reading some configuration files (e.g. /etc/initrc) and forking a bunch of userland executables based on those files.
systemd provides a "popular" init implementation for desktop distros as of 2017.
BusyBox provides its own minimalistic init implementation which Buildroot, and therefore this repo, uses by default.
To have more control over the system, you can replace BusyBox’s init with your own.
The -E option replaces init and evals a command from the Kernel command line parameters:
./run -E 'echo "asdf qwer";insmod /hello.ko;/poweroff.out'
It is basically a shortcut for:
./run -e 'init=/eval.sh - lkmc_eval="insmod /hello.ko;/poweroff.out"'
Source: rootfs_overlay/eval.sh.
However, -E is smarter:
-
allows quoting and newlines by using base64 encoding, see: Kernel command line parameters escaping
-
automatically chooses between init= and rdinit= for you, see: Path to init
so you should almost always use it, unless you are really counting each cycle ;-)
This method replaces BusyBox' init completely, which makes things more minimal, but also has the following consequences:
-
/etc/fstab mounts are not done, notably /proc and /sys; test it out with:
./run -E 'echo asdf;ls /proc;ls /sys;echo qwer'
-
no shell is launched at the end of boot for you to interact with the system. You could explicitly add a sh at the end of your commands however:
./run -E 'echo hello;sh'
The best way to overcome those limitations is to use: Run command at the end of BusyBox init
If the script is large, you can add it to a gitignored file and pass that to -E as in:
echo '
insmod /hello.ko
/poweroff.out
' > gitignore.sh
./run -E "$(cat gitignore.sh)"
or add it to a file to the root filesystem guest and rebuild:
echo '#!/bin/sh
insmod /hello.ko
/poweroff.out
' > rootfs_overlay/gitignore.sh
chmod +x rootfs_overlay/gitignore.sh
./build
./run -e 'init=/gitignore.sh'
Remember that if your init returns, the kernel will panic, there are just two non-panic possibilities:
-
run forever in a loop or long sleep
-
poweroff the machine
Just using BusyBox' poweroff at the end of the init does not work and the kernel panics:
./run -E poweroff
because BusyBox' poweroff tries to do some fancy stuff like killing init, likely to allow userland to shut down nicely.
But this fails when we are init itself!
poweroff works more brutally and effectively if you add -f:
./run -E 'poweroff -f'
but why not just use our minimal /poweroff.out and be done with it?
./run -E '/poweroff.out'
Source: kernel_module/user/poweroff.c
This also illustrates how to shutdown the computer from C: https://stackoverflow.com/questions/28812514/how-to-shutdown-linux-using-c-or-qt-without-call-to-system
I dare you to guess what this does:
./run -E '/sleep_forever.out'
This executable is a convenient simple init that does not panic and sleeps instead.
Get a reasonable answer to "how long does boot take?":
./run -F '/time_boot.out'
Dmesg contains a message of type:
[ 2.188242] time_boot.c
which tells us that boot took 2.188242 seconds.
Use the -F option if you rely on something that BusyBox' init sets up for you, like /etc/fstab:
./run -F 'echo asdf;ls /proc;ls /sys;echo qwer'
After the commands run, you are left on an interactive shell.
The above command is basically equivalent to:
./run -f 'lkmc_eval="insmod /hello.ko;poweroff.out;"'
where the lkmc_eval option gets evaled by our default S98 startup script if present.
However, -F is smarter and uses base64 encoding, much like -E vs -e, so you will just use -F most of the time.
Alternatively, add them to a new init.d entry to run at the end of the BusyBox init:
cp rootfs_overlay/etc/init.d/S98 rootfs_overlay/etc/init.d/S99.gitignore
vim rootfs_overlay/etc/init.d/S99.gitignore
./build
./run
and they will be run automatically before the login prompt.
Scripts under /etc/init.d are run by /etc/init.d/rcS, which gets called by the line ::sysinit:/etc/init.d/rcS in /etc/inittab.
The init is selected at:
-
initrd or initramfs system: /init, a custom one can be set with the rdinit= kernel command line parameter
-
otherwise: default is /sbin/init, followed by some other paths; a custom one can be set with init=
The kernel parses parameters from the kernel command line up to "-"; if it doesn’t recognize a parameter and it doesn’t contain a '.', the parameter gets passed to init: parameters with '=' go into init’s environment, others are passed as command line arguments to init. Everything after "-" is passed as an argument to init.
And you can try it out with:
./run -e 'init=/init_env_poweroff.sh - asdf=qwer zxcv'
Source: rootfs_overlay/init_env_poweroff.sh.
Also note how the annoying dash - also gets passed as a parameter to init, which makes it impossible to use this method for most executables.
Finally, the docs are lying, arguments with dots that come after - are still treated specially (of the form subsystem.somevalue) and disappear:
./run -e 'init=/init_env_poweroff.sh - /poweroff.out'
We disable networking by default because it starts a userland process, and we want to keep the number of userland processes to a minimum to make the system more understandable.
Enable:
/sbin/ifup -a
Disable:
/sbin/ifdown -a
Test:
wget google.com
BusyBox' ping does not work with hostnames even when networking is working fine:
ping google.com
To enable networking by default, use the methods documented at Automatic startup commands.
You can make QEMU or gem5 run faster by enabling KVM with:
./run -K
but it was broken in gem5 with pending patches: https://www.mail-archive.com/[email protected]/msg15046.html It fails immediately on:
panic: KVM: Failed to enter virtualized mode (hw reason: 0x80000021)
KVM uses the KVM Linux kernel feature of the host to run most instructions natively.
We don’t enable KVM by default because:
-
only works if the architecture of the guest equals that of the host.
We have only tested / supported it on x86, but it is rumoured that QEMU and gem5 also have ARM KVM support if you are running an ARM desktop for some weird reason :-)
-
limits visibility, since more things are running natively:
-
can’t use GDB
-
can’t do instruction tracing
-
-
kernel boots are already fast enough without
-enable-kvm
The main use case for -enable-kvm in this repository is to test if something that takes a long time to run is functionally correct.
For example, when porting a benchmark to Buildroot, you can first use QEMU’s KVM to test that the benchmark is producing the correct results, before analysing it more deeply in gem5, which runs much slower.
Build and run:
./build -b br2/x11
./run -x
Inside QEMU:
startx
And then from the GUI you can start exciting graphical programs such as:
xcalc
xeyes
We don’t build X11 by default because it takes a considerable amount of time (about 20%), and is not expected to be used by most users: you need to pass the -x flag to enable it.
More details: https://unix.stackexchange.com/questions/70931/how-to-install-x11-on-my-own-linux-buildroot-system/306116#306116
Not sure how well that graphics stack represents real systems, but if it does it would be a good way to understand how it works.
The X11 packages have an xserver prefix, as in:
./build -b br2/x11 -- xserver_xorg-server-reconfigure
the easiest way to find them is to just list out/x86_64/buildroot/build/x*.
TODO as of c2696c978d6ca88e8b8599c92b1beeda80eb62b2 I noticed that startx leads to a WARN_ON:
[ 2.809104] WARNING: CPU: 0 PID: 51 at drivers/gpu/drm/ttm/ttm_bo_vm.c:304 ttm_bo_vm_open+0x37/0x40
TODO 9076c1d9bcc13b6efdb8ef502274f846d8d4e6a1 I’m 100% sure that it was working before, but I didn’t run it forever, and it stopped working at some point. Needs bisection, on whatever commit last touched x11 stuff.
-show-cursor did not help, I just get to see the host cursor, but the guest cursor still does not move.
Doing:
watch -n 1 grep i8042 /proc/interrupts
shows that interrupts do happen when mouse and keyboard presses are done, so I suspect that something is wrong with either:
-
QEMU. Same behaviour if I try the host’s QEMU 2.10.1 however.
-
X11 configuration. We do have
BR2_PACKAGE_XDRIVER_XF86_INPUT_MOUSE=y.
/var/log/Xorg.0.log contains the following interesting lines:
[ 27.549] (II) LoadModule: "mouse"
[ 27.549] (II) Loading /usr/lib/xorg/modules/input/mouse_drv.so
[ 27.590] (EE) <default pointer>: Cannot find which device to use.
[ 27.590] (EE) <default pointer>: cannot open input device
[ 27.590] (EE) PreInit returned 2 for "<default pointer>"
[ 27.590] (II) UnloadModule: "mouse"
The file /dev/input/mice does not exist.
Note that our current kernel config fragment sets:
# CONFIG_INPUT_MOUSE is not set
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
for gem5, so you might want to remove those lines to debug this.
On ARM, startx hangs at a message:
vgaarb: this pci device is not a vga device
and nothing shows on the screen, and:
grep EE /var/log/Xorg.0.log
says:
(EE) Failed to load module "modesetting" (module does not exist, 0)
A friend told me this but I haven’t tried it yet:
-
xf86-video-modesetting is likely the missing ingredient, but it does not seem possible to activate it from Buildroot currently without patching things.
-
xf86-video-fbdev should work as well, but we need to make sure fbdev is enabled, and maybe add some line to the Xorg.conf
The kernel can boot from a CPIO file, which is a directory serialization format much like tar: https://superuser.com/questions/343915/tar-vs-cpio-what-is-the-difference
The bootloader, which for us is QEMU itself, is then configured to put that CPIO into memory, and tell the kernel that it is there.
With this setup, you don’t even need to give a root filesystem to the kernel, it just does everything in memory in a ramfs.
To enable initrd instead of the default ext2 disk image, do:
./build -i
./run -i
Notice how it boots fine, even though this leads to not giving QEMU the -drive option, as can be verified with:
cat ./out/x86_64/qemu/0/run.sh
Also as expected, there is no filesystem persistency, since we are doing everything in memory:
date >f
poweroff
cat f
# can't open 'f': No such file or directory
which can be good for automated tests, as it ensures that you are using a pristine unmodified system image every time.
One downside of this method is that it has to put the entire filesystem into memory, and could lead to a panic:
end Kernel panic - not syncing: Out of memory and no killable processes...
This can be solved by increasing the memory with:
./run -im 256M
The main ingredients to get initrd working are:
-
BR2_TARGET_ROOTFS_CPIO=y: make Buildroot generate images/rootfs.cpio in addition to the other images. It is also possible to compress that image with other options.
-
qemu -initrd: make QEMU put the image into memory and tell the kernel about it.
-
CONFIG_BLK_DEV_INITRD=y: compile the kernel with initrd support, see also: https://unix.stackexchange.com/questions/67462/linux-kernel-is-not-finding-the-initrd-correctly/424496#424496 Buildroot forces that option when BR2_TARGET_ROOTFS_CPIO=y is given.
https://unix.stackexchange.com/questions/89923/how-does-linux-load-the-initrd-image asks how the mechanism works in more detail.
Most modern desktop distributions have an initrd in their root disk to do early setup.
The rationale for this is described at: https://en.wikipedia.org/wiki/Initial_ramdisk
One obvious use case is having an encrypted root filesystem: you keep the initrd in an unencrypted partition, and then setup decryption from there.
I think GRUB then knows how to read common disk formats, and then loads that initrd to memory with a /boot/grub/grub.cfg directive of type:
initrd /initrd.img-4.4.0-108-generic
initramfs is just like initrd, but you also glue the image directly to the kernel image itself.
So the only argument that QEMU needs is the -kernel: no -drive, not even -initrd! Pretty cool.
Try it out with:
./build -I -l && ./run -I
The -l (ell) should only be used the first time you move to / from a different root filesystem method (ext2 or cpio) to initramfs to overcome: https://stackoverflow.com/questions/49260466/why-when-i-change-br2-linux-kernel-custom-config-file-and-run-make-linux-reconfi
./build -I && ./run -I
It is interesting to see how this increases the size of the kernel image if you do a:
ls -lh out/x86_64/buildroot/images/bzImage
before and after using initramfs, since the .cpio is now glued to the kernel image.
In the background, it uses BR2_TARGET_ROOTFS_INITRAMFS, and this makes the kernel config option CONFIG_INITRAMFS_SOURCE point to the CPIO that will be embedded in the kernel image.
http://nairobi-embedded.org/initramfs_tutorial.html shows a full manual setup.
TODO we were not able to get it working yet: https://stackoverflow.com/questions/49261801/how-to-boot-the-linux-kernel-with-initrd-or-initramfs-with-gem5
By default, we use a .config that is a mixture of:
-
Buildroot’s minimal per machine .config, which has the minimal options needed to boot
-
our kernel config fragments, which enable the options we want to play with
Use just your own exact .config instead:
./build -K data/myconfig -l
Beware that Buildroot can override via sed some of the configurations we make no matter what, e.g. it forces CONFIG_BLK_DEV_INITRD=y when BR2_TARGET_ROOTFS_CPIO is on, so you might want to double check as explained at Find the kernel config. TODO check if there is a way to prevent that patching and maybe patch Buildroot for it, it is too fuzzy. People should be able to just build with whatever .config they want.
Modify a single option:
./build -C 'CONFIG_FORTIFY_SOURCE=y' -l
Use an extra kernel config fragment file:
printf '
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
' > myconfig
./build -c 'myconfig' -l
-K, -c, -C can all be used at the same time. Options passed via -C take precedence over -c, which takes precedence over -K.
Get the build config in the guest:
zcat /proc/config.gz
or with our shortcut:
/conf.sh
or to conveniently grep for a specific option case insensitively:
/conf.sh ikconfig
Source: rootfs_overlay/conf.sh.
This is enabled by:
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
From host:
cat out/*/buildroot/build/linux-custom/.config
Just for fun https://stackoverflow.com/a/14958263/895245:
./linux/scripts/extract-ikconfig out/*/buildroot/build/linux-custom/vmlinux
although this can be useful when someone gives you a random image.
We have managed to come up with minimalistic kernel configs that work for both QEMU and gem5 (oh, the hours of bisection).
Our configs are all based on Buildroot’s configs, which were designed for QEMU, and then on top of those we also add:
-
kernel_config_fragment/min: minimal tweaks required to boot gem5 or for using our slightly different QEMU command line options than Buildroot
-
kernel_config_fragment/default: optional configs that we add by default to our kernel build because they increase visibility, and don’t significantly increase build time nor add significant runtime overhead
Changes to those files automatically trigger kernel reconfigures even without using the linux-reconfigure target, since timestamps are used to decide if changes happened or not.
Having the same config working for both QEMU and gem5 means that you can deal with functional matters in QEMU, which runs much faster, and switch to gem5 only for performance issues.
To see Buildroot’s base configs, have a look at buildroot/configs/qemu_x86_64_defconfig, which our ./build script uses.
That file contains BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE="board/qemu/x86_64/linux-4.11.config", which points to the base config file used.
arm, on the other hand, uses buildroot/configs/qemu_arm_vexpress_defconfig, which contains BR2_LINUX_KERNEL_DEFCONFIG="vexpress", and therefore just does a make vexpress_defconfig.
Other configs which we had previously tested at 4e0d9af81fcce2ce4e777cb82a1990d7c2ca7c1e are:
-
Jason’s magic x86_64 config: http://web.archive.org/web/20171229121642/http://www.lowepower.com/jason/files/config which is referenced at: http://web.archive.org/web/20171229121525/http://www.lowepower.com/jason/setting-up-gem5-full-system.html. QEMU boots with that config if you remove the line: # CONFIG_VIRTIO_PCI is not set
-
arm and aarch64 configs present in the official ARM gem5 Linux kernel fork: https://gem5.googlesource.com/arm/linux, e.g. for arm v4.9: https://gem5.googlesource.com/arm/linux/+/917e007a4150d26a0aa95e4f5353ba72753669c7/arch/arm/configs/gem5_defconfig. The patches there are just simple optimizations and instrumentation, but they are not needed to boot.
On one hand, we would like to have our configs as a single git file tracked on this repo, to be able to easily refer people to them. However, that would lose us the ability to:
-
reuse Buildroot’s configs
-
split our configs into min and default
We try to use the latest possible kernel major release version.
In QEMU:
cat /proc/version
or in the source:
cd linux
git log | grep -E ' Linux [0-9]+\.' | head
During updates, all your kernel modules may break since the kernel API is not stable.
The breakages are usually trivial: things moving around across headers or into sub-structs.
The userland, however, should simply not break, as Linus enforces strict backwards compatibility of userland interfaces.
This backwards compatibility is just awesome, it makes getting and running the latest master painless.
This also makes this repo the perfect setup to develop the Linux kernel.
When we had a local patchset on top of mainline, this is how we updated it:
# Last point before our patches.
last_mainline_revision=v4.15
next_mainline_revision=v4.16
cd linux
# Create a branch before the rebase in case things go wrong.
git checkout -b "lkmc-${last_mainline_revision}"
git remote set-url origin [email protected]:cirosantilli/linux.git
git push
git checkout master
git remote add up git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
git fetch up
git rebase --onto "$next_mainline_revision" "$last_mainline_revision"
cd ..
./build -lk
# Manually fix broken kernel modules if necessary.
git branch "buildroot-2017.08-linux-${last_mainline_revision}"
git add .
# And update the README to show off.
git commit -m "linux: update to ${next_mainline_revision}"
# Test the heck out of it, especially kernel modules and GDB.
./run
git push
But we have since moved to running just mainline, which makes the update simpler.
The kernel is not forward compatible, however, so downgrading the Linux kernel requires downgrading the userland too to the latest Buildroot branch that supports it.
The default Linux kernel version is bumped in Buildroot with commit messages of type:
linux: bump default to version 4.9.6
So you can try:
git log --grep 'linux: bump default to version'
Those commits change BR2_LINUX_KERNEL_LATEST_VERSION in Buildroot’s linux/Config.in.
You should then look up if there is a branch that supports that kernel. Staying on branches is a good idea as they will get backports, in particular ones that fix the build as newer host versions come out.
The Linux kernel allows passing module parameters at insertion time through the init_module and finit_module system calls:
/params.sh
echo $?
Outcome: the test passes:
0
Sources:
As shown in the example, module parameters can also be read and modified at runtime from sysfs.
We can obtain the help text of the parameters with:
modinfo /params.ko
The output contains:
parm: j:my second favorite int
parm: i:my favorite int
modprobe insertion can also set default parameters via the /etc/modprobe.conf file:
modprobe params
cat /sys/kernel/debug/lkmc_params
Output:
12 34
This is especially important when loading modules with Kernel module dependencies, since otherwise we would have no opportunity to pass those parameters.
modprobe.conf doesn’t actually insmod anything for us: https://superuser.com/questions/397842/automatically-load-kernel-module-at-boot-angstrom/1267464#1267464
One module can depend on symbols of another module that are exported with EXPORT_SYMBOL:
/dep.sh
echo $?
Outcome: the test passes:
0
Sources:
The kernel deduces dependencies based on the EXPORT_SYMBOL that each module uses.
Symbols exported by EXPORT_SYMBOL can be seen with:
insmod /dep.ko
grep lkmc_dep /proc/kallsyms
sample output:
ffffffffc0001030 r __ksymtab_lkmc_dep [dep]
ffffffffc000104d r __kstrtab_lkmc_dep [dep]
ffffffffc0002300 B lkmc_dep [dep]
This requires CONFIG_KALLSYMS_ALL=y.
Dependency information is stored by the kernel module build system in the .ko files' modinfo, e.g.:
modinfo /dep2.ko
contains:
depends: dep
We can double check with:
strings -3 /dep2.ko | grep -E 'depends'
The output contains:
depends=dep
Module dependencies are also stored at:
cd /lib/modules/*
grep dep modules.dep
Output:
extra/dep2.ko: extra/dep.ko
extra/dep.ko:
TODO: what for, and at which point does Buildroot / BusyBox generate that file?
Unlike insmod, modprobe deals with kernel module dependencies for us:
modprobe dep2
Removal also removes required modules that have zero usage count:
modprobe -r dep2
Bibliography:
modprobe seems to use information contained in the kernel module itself for the dependencies since modprobe dep2 still works even if we modify modules.dep to remove the dependency.
Module metadata is stored on module files at compile time. Some of the fields can be retrieved through the THIS_MODULE struct module:
insmod /module_info.ko
Dmesg output:
name = module_info
version = 1.0
Source: kernel_module/module_info.c
Some of those are also present on sysfs:
cat /sys/module/module_info/version
Output:
1.0
And we can also observe them with the modinfo command line utility:
modinfo /module_info.ko
sample output:
filename:       /module_info.ko
license:        GPL
version:        1.0
srcversion:     AF3DE8A8CFCDEB6B00E35B6
depends:
vermagic:       4.17.0 SMP mod_unload modversions
Module information is stored in a special .modinfo section of the ELF file:
./runtc readelf -SW ./out/x86_64/buildroot/target/module_info.ko
contains:
[ 5] .modinfo PROGBITS 0000000000000000 0000d8 000096 00 A 0 0 8
and:
./runtc readelf -x .modinfo ./out/x86_64/buildroot/target/module_info.ko
gives:
0x00000000 6c696365 6e73653d 47504c00 76657273 license=GPL.vers
0x00000010 696f6e3d 312e3000 61736466 3d717765 ion=1.0.asdf=qwe
0x00000020 72000000 00000000 73726376 65727369 r.......srcversi
0x00000030 6f6e3d41 46334445 38413843 46434445 on=AF3DE8A8CFCDE
0x00000040 42364230 30453335 42360000 00000000 B6B00E35B6......
0x00000050 64657065 6e64733d 006e616d 653d6d6f depends=.name=mo
0x00000060 64756c65 5f696e66 6f007665 726d6167 dule_info.vermag
0x00000070 69633d34 2e31372e 3020534d 50206d6f ic=4.17.0 SMP mo
0x00000080 645f756e 6c6f6164 206d6f64 76657273 d_unload modvers
0x00000090 696f6e73 2000                       ions .
I think a dedicated section is used to allow the Linux kernel and command line tools to easily parse that information from the ELF file as we’ve done with readelf.
Bibliography:
Vermagic is a magic string present in the kernel and on modinfo of kernel modules. It is used to verify that the kernel module was compiled against a compatible kernel version and relevant configuration:
insmod /vermagic.ko
Possible dmesg output:
VERMAGIC_STRING = 4.17.0 SMP mod_unload modversions
Source: kernel_module/vermagic.c
If we artificially create a mismatch with MODULE_INFO(vermagic, ...), the insmod fails with:
insmod: can't insert '/vermagic_fail.ko': invalid module format
and dmesg shows both the expected and the found vermagic:
vermagic_fail: version magic 'asdfqwer' should be '4.17.0 SMP mod_unload modversions '
Source: kernel_module/vermagic_fail.c
The kernel’s vermagic is defined based on compile time configurations at include/linux/vermagic.h:
#define VERMAGIC_STRING \
UTS_RELEASE " " \
MODULE_VERMAGIC_SMP MODULE_VERMAGIC_PREEMPT \
MODULE_VERMAGIC_MODULE_UNLOAD MODULE_VERMAGIC_MODVERSIONS \
MODULE_ARCH_VERMAGIC \
MODULE_RANDSTRUCT_PLUGIN
The SMP part of the string for example is defined on the same file based on the value of CONFIG_SMP:
#ifdef CONFIG_SMP
#define MODULE_VERMAGIC_SMP "SMP "
#else
#define MODULE_VERMAGIC_SMP ""
#endif
TODO how to get the vermagic from running kernel from userland? https://lists.kernelnewbies.org/pipermail/kernelnewbies/2012-October/006306.html
kmod modprobe has flags to skip those checks: --force-vermagic and --force-modversion.
These options just strip the vermagic / modversion information from the module before loading, so they are not a kernel feature.
init_module and cleanup_module are an older alternative to the module_init and module_exit macros:
insmod /init_module.ko
rmmod init_module
Dmesg output:
init_module
cleanup_module
Source: kernel_module/module_init.c
TODO why were module_init and module_exit created? https://stackoverflow.com/questions/3218320/what-is-the-difference-between-module-init-and-init-module-in-a-linux-kernel-mod
To test out kernel panics and oops in controlled circumstances, try out the modules:
insmod /panic.ko
insmod /oops.ko
Source:
A panic can also be generated with:
echo c > /proc/sysrq-trigger
Panic vs oops: https://unix.stackexchange.com/questions/91854/whats-the-difference-between-a-kernel-oops-and-a-kernel-panic
How to generate them:
When a panic happens, Shift-PgUp does not work as it normally does, and it is hard to get the logs if you are on graphic mode:
On panic, the kernel dies, and so does our terminal.
Make the kernel reboot after n seconds after panic:
echo 1 > /proc/sys/kernel/panic
Can also be controlled with the panic= kernel boot parameter.
0 to disable: https://unix.stackexchange.com/questions/29567/how-to-configure-the-linux-kernel-to-reboot-on-panic/29569#29569
The panic trace looks like:
panic: loading out-of-tree module taints kernel.
panic myinit
Kernel panic - not syncing: hello panic
CPU: 0 PID: 53 Comm: insmod Tainted: G O 4.16.0 #6
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.11.0-0-g63451fca13-prebuilt.qemu-project.org 04/01/2014
Call Trace:
 dump_stack+0x7d/0xba
 ? 0xffffffffc0000000
 panic+0xda/0x213
 ? printk+0x43/0x4b
 ? 0xffffffffc0000000
 myinit+0x1d/0x20 [panic]
 do_one_initcall+0x3e/0x170
 do_init_module+0x5b/0x210
 load_module+0x2035/0x29d0
 ? kernel_read_file+0x7d/0x140
 ? SyS_finit_module+0xa8/0xb0
 SyS_finit_module+0xa8/0xb0
 do_syscall_64+0x6f/0x310
 ? trace_hardirqs_off_thunk+0x1a/0x32
 entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x7ffff7b36206
RSP: 002b:00007fffffffeb78 EFLAGS: 00000206 ORIG_RAX: 0000000000000139
RAX: ffffffffffffffda RBX: 000000000000005c RCX: 00007ffff7b36206
RDX: 0000000000000000 RSI: 000000000069e010 RDI: 0000000000000003
RBP: 000000000069e010 R08: 00007ffff7ddd320 R09: 0000000000000000
R10: 00007ffff7ddd320 R11: 0000000000000206 R12: 0000000000000003
R13: 00007fffffffef4a R14: 0000000000000000 R15: 0000000000000000
Kernel Offset: disabled
---[ end Kernel panic - not syncing: hello panic
Notice how our panic message hello panic is visible at:
Kernel panic - not syncing: hello panic
The log shows which module each symbol belongs to if any, e.g.:
myinit+0x1d/0x20 [panic]
says that the function myinit is in the module panic.
To find the line that panicked, do:
./rungdb
and then:
info line *(myinit+0x1d)
which gives us the correct line:
Line 7 of "/linux-kernel-module-cheat/out/x86_64/buildroot/build/kernel_module-1.0/./panic.c" starts at address 0xbf00001c <myinit+28> and ends at 0xbf00002c <myexit>.
as explained at: https://stackoverflow.com/questions/8545931/using-gdb-to-convert-addresses-to-lines/27576029#27576029
The exact same thing can be done post mortem with:
./out/x86_64/buildroot/host/usr/bin/x86_64-buildroot-linux-uclibc-gdb \
  -batch \
  -ex 'info line *(myinit+0x1d)' \
  ./out/x86_64/buildroot/build/kernel_module-1.0/panic.ko \
;
Related:
Basically just calls panic("BUG!") for most archs.
Useful to automate bisections.
QEMU:
./run -E 'insmod /panic.ko' -e 'panic=1' -- -no-reboot
gem5: TODO gem5’s config.ini has a system.panic_on_panic param which I bet will work, but it does not seem to be exposed to fs.py.
If CONFIG_KALLSYMS=n, then addresses are shown on traces instead of symbol plus offset.
In v4.16 it does not seem possible to configure that at runtime. GDB step debugging with:
./run -F 'insmod /dump_stack.ko' -du -U dump_stack
shows that traces are printed at arch/x86/kernel/dumpstack.c:
static void printk_stack_address(unsigned long address, int reliable,
char *log_lvl)
{
touch_nmi_watchdog();
printk("%s %s%pB\n", log_lvl, reliable ? "" : "? ", (void *)address);
}
and %pB is documented at Documentation/core-api/printk-formats.rst:
If KALLSYMS are disabled then the symbol address is printed instead.
I wasn’t able to disable CONFIG_KALLSYMS to test this out however: is it being selected by some other option? I then used make menuconfig to see which options select it, but they were all off…
On oops, the shell still lives after.
However we:
-
leave the normal control flow, and oops after never gets printed: an interrupt is serviced
-
cannot rmmod oops afterwards
It is possible to make oops lead to panics always with:
echo 1 > /proc/sys/kernel/panic_on_oops
insmod /oops.ko
An oops stack trace looks like:
BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
IP: myinit+0x18/0x30 [oops]
PGD dccf067 P4D dccf067 PUD dcc1067 PMD 0
Oops: 0002 [#1] SMP NOPTI
Modules linked in: oops(O+)
CPU: 0 PID: 53 Comm: insmod Tainted: G O 4.16.0 #6
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.11.0-0-g63451fca13-prebuilt.qemu-project.org 04/01/2014
RIP: 0010:myinit+0x18/0x30 [oops]
RSP: 0018:ffffc900000d3cb0 EFLAGS: 00000282
RAX: 000000000000000b RBX: ffffffffc0000000 RCX: ffffffff81e3e3a8
RDX: 0000000000000001 RSI: 0000000000000086 RDI: ffffffffc0001033
RBP: ffffc900000d3e30 R08: 69796d2073706f6f R09: 000000000000013b
R10: ffffea0000373280 R11: ffffffff822d8b2d R12: 0000000000000000
R13: ffffffffc0002050 R14: ffffffffc0002000 R15: ffff88000dc934c8
FS: 00007ffff7ff66a0(0000) GS:ffff88000fc00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 000000000dcd2000 CR4: 00000000000006f0
Call Trace:
 do_one_initcall+0x3e/0x170
 do_init_module+0x5b/0x210
 load_module+0x2035/0x29d0
 ? SyS_finit_module+0xa8/0xb0
 SyS_finit_module+0xa8/0xb0
 do_syscall_64+0x6f/0x310
 ? trace_hardirqs_off_thunk+0x1a/0x32
 entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x7ffff7b36206
RSP: 002b:00007fffffffeb78 EFLAGS: 00000206 ORIG_RAX: 0000000000000139
RAX: ffffffffffffffda RBX: 000000000000005c RCX: 00007ffff7b36206
RDX: 0000000000000000 RSI: 000000000069e010 RDI: 0000000000000003
RBP: 000000000069e010 R08: 00007ffff7ddd320 R09: 0000000000000000
R10: 00007ffff7ddd320 R11: 0000000000000206 R12: 0000000000000003
R13: 00007fffffffef4b R14: 0000000000000000 R15: 0000000000000000
Code: <c7> 04 25 00 00 00 00 00 00 00 00 e8 b2 33 09 c1 31 c0 c3 0f 1f 44
RIP: myinit+0x18/0x30 [oops] RSP: ffffc900000d3cb0
CR2: 0000000000000000
---[ end trace 3cdb4e9d9842b503 ]---
To find the line that oopsed, look at the RIP register:
RIP: 0010:myinit+0x18/0x30 [oops]
and then on GDB:
./rungdb
run
info line *(myinit+0x18)
which gives us the correct line:
Line 7 of "/linux-kernel-module-cheat/out/arm/buildroot/build/kernel_module-1.0/./panic.c" starts at address 0xbf00001c <myinit+28> and ends at 0xbf00002c <myexit>.
This did not work on arm due to GDB step debug kernel module ARM, so we need to either:
-
Kernel module stack trace to source line post-mortem method
The dump_stack function produces a stack trace much like panic and oops, but causes no problems and we return to the normal control flow, and can cleanly remove the module afterwards:
insmod /dump_stack.ko
Source: kernel_module/dump_stack.c
The WARN_ON macro basically just calls dump_stack.
One extra side effect is that we can make it also panic with:
echo 1 > /proc/sys/kernel/panic_on_warn
insmod /warn_on.ko
Source: kernel_module/warn_on.c
Can also be activated with the panic_on_warn boot parameter.
Pseudo filesystems are filesystems that don’t represent actual files in a hard disk, but rather allow us to do special operations on filesystem-related system calls.
What each pseudo-file does for each related system call is defined by its File operations.
Bibliography:
Debugfs is the simplest pseudo filesystem to play around with:
/debugfs.sh
echo $?
Outcome: the test passes:
0
Sources:
Debugfs is made specifically to help test kernel stuff. Just mount, set File operations, and we are done.
For this reason, it is the filesystem that we use whenever possible in our tests.
debugfs.sh explicitly mounts a debugfs at a custom location, but the most common mount point is /sys/kernel/debug.
This mount is not done automatically by the kernel however: we, like most distros, do it from userland with our fstab.
Debugfs support requires the kernel to be compiled with CONFIG_DEBUG_FS=y.
Only the more basic file operations can be implemented in debugfs, e.g. mmap never gets called:
Bibliography: https://github.com/chadversary/debugfs-tutorial
Procfs is just another fops entry point:
/procfs.sh
echo $?
Outcome: the test passes:
0
Procfs is a little less convenient than debugfs, but is more used in serious applications.
Procfs can run all system calls, including ones that debugfs can’t, e.g. mmap.
Sources:
Sysfs is more restricted than procfs, as it does not take an arbitrary file_operations:
/sysfs.sh
echo $?
Outcome: the test passes:
0
Sources:
Vs procfs:
You basically can only do open, close, read, write, and lseek on sysfs files.
It is similar to a seq_file file operation, except that write is also implemented.
TODO: what are those kobject structs? Make a more complex example that shows what they can do.
Bibliography:
Character devices can have arbitrary File operations associated to them:
/character_device.sh
echo $?
Outcome: the test passes:
0
Sources:
Unlike procfs entries, character device files are created with userland mknod or mknodat syscalls:
mknod </dev/path_to_dev> c <major> <minor>
Intuitively, for physical devices like keyboards, the major number maps to which driver, and the minor number maps to which device it is.
A single driver can drive multiple compatible devices.
The major and minor numbers can be observed with:
ls -l /dev/urandom
Output:
crw-rw-rw- 1 root root 1, 9 Jun 29 05:45 /dev/urandom
which means:
-
c (first letter): this is a character device. Would be b for a block device.
-
1, 9: the major number is 1, and the minor 9
To avoid device number conflicts when registering the driver we:
-
ask the kernel to allocate a free major number for us with register_chrdev(0, ...)
-
find out which number was assigned by grepping /proc/devices for the kernel module name
Bibliography: https://unix.stackexchange.com/questions/37829/understanding-character-device-or-character-special-files/371758#371758
The device file can also be created from the kernel module itself, and destroyed on rmmod:
/character_device_create.sh
echo $?
Outcome: the test passes:
0
Sources:
File operations are the main method of userland driver communication. struct file_operations determines what the kernel will do on filesystem system calls of Pseudo filesystems.
This example illustrates the most basic system calls: open, read, write, close and lseek:
/fops.sh
echo $?
Outcome: the test passes:
0
Sources:
Then give this a try:
sh -x /fops.sh
We have put printks on each fop, so this allows you to see which system calls are being made for each command.
No, there is no official documentation: http://stackoverflow.com/questions/15213932/what-are-the-struct-file-operations-arguments
Writing trivial read File operations is repetitive and error prone. The seq_file API makes the process much easier for those trivial cases:
/seq_file.sh
echo $?
Outcome: the test passes:
0
Sources:
In this example we create a debugfs file that behaves just like a file that contains:
0
1
2
However, we only store a single integer in memory and calculate the file on the fly in an iterator fashion.
seq_file does not provide write: https://stackoverflow.com/questions/30710517/how-to-implement-a-writable-proc-file-by-using-seq-file-in-a-driver-module
Bibliography:
If you have the entire read output upfront, single_open is an even more convenient version of seq_file:
/seq_file.sh
echo $?
Outcome: the test passes:
0
Sources:
This example produces a debugfs file that behaves like a file that contains:
ab
cd
The poll system call allows a user process to do a non-busy wait on a kernel event:
/poll.sh
Outcome: jiffies gets printed to stdout every second from userland.
Sources:
Typically, we are waiting for some hardware to make some piece of data available to the kernel.
The hardware notifies the kernel that the data is ready with an interrupt.
To simplify this example, we just fake the hardware interrupts with a kthread that sleeps for a second in an infinite loop.
The ioctl system call is the best way to pass an arbitrary number of parameters to the kernel in a single go:
/ioctl.sh
echo $?
Outcome: the test passes:
0
Sources:
ioctl is one of the most important methods of communication with real device drivers, which often take several fields as input.
ioctl takes as input:
-
an integer request: it usually identifies what type of operation we want to do on this call
-
an untyped pointer to memory: can be anything, but is typically a pointer to a struct. The type of the struct often depends on the request input. This struct is defined in a uapi-style C header that is used both to compile the kernel module and the userland executable. The fields of this struct can be thought of as arbitrary input parameters.
And the output is:
-
an integer return value. man ioctl documents: "Usually, on success zero is returned. A few ioctl() requests use the return value as an output parameter and return a nonnegative value on success. On error, -1 is returned, and errno is set appropriately."
-
the input pointer data may be overwritten to contain arbitrary output
Bibliography:
The mmap system call allows us to share memory between user and kernel space without copying:
/mmap.sh
echo $?
Outcome: the test passes:
0
Sources:
In this example, we make a tiny 4 byte kernel buffer available to user-space, and we then modify it on userspace, and check that the kernel can see the modification.
mmap, like most more complex File operations, does not work with debugfs as of 4.9, so we use a procfs file for it.
Example adapted from: https://coherentmusings.wordpress.com/2014/06/10/implementing-mmap-for-transferring-data-from-user-space-to-kernel-space/
Bibliography:
Anonymous inodes allow getting multiple file descriptors from a single filesystem entry, which reduces namespace pollution compared to creating multiple device files:
/anonymous_inode.sh
echo $?
Outcome: the test passes:
0
Sources:
This example gets an anonymous inode via ioctl from a debugfs entry by using anon_inode_getfd.
Reads to that inode return the sequence: 1, 10, 100, …, 10000000, 1, 10, 100, …
Netlink sockets offer a socket API for kernel / userland communication:
/netlink.sh
echo $?
Outcome: the test passes:
0
Sources:
Launch multiple user requests in parallel to stress our socket:
insmod /netlink.ko sleep=1
for i in `seq 16`; do /netlink.out & done
TODO: what is the advantage over read, write and poll? https://stackoverflow.com/questions/16727212/how-netlink-socket-in-linux-kernel-is-different-from-normal-polling-done-by-appl
Bibliography:
Kernel threads are managed exactly like userland threads; they also have a backing task_struct, and are scheduled with the same mechanism:
insmod /kthread.ko
Source: kernel_module/kthread.c
Outcome: dmesg counts from 0 to 9 once every second infinitely many times:
0 1 2 ... 8 9 0 1 2 ...
The count stops when we rmmod:
rmmod kthread
The sleep is done with usleep_range, see: sleep.
Bibliography:
Let’s launch two threads and see if they actually run in parallel:
insmod /kthreads.ko
Source: kernel_module/kthreads.c
Outcome: two threads count to dmesg from 0 to 9 in parallel.
Each line has output of form:
<thread_id> <count>
Very likely outcome:

1 0
2 0
1 1
2 1
1 2
2 2
1 3
2 3

The threads almost always interleave nicely, thus confirming that they are actually running in parallel.
Count to dmesg every one second from 0 up to n - 1:
insmod /sleep.ko n=5
Source: kernel_module/sleep.c
The sleep is done with a call to usleep_range directly inside module_init for simplicity.
Bibliography:
A more convenient front-end for kthread:
insmod /workqueue_cheat.ko
Outcome: count from 0 to 9 infinitely many times
Stop counting:
rmmod workqueue_cheat
Source: kernel_module/workqueue_cheat.c
The workqueue thread is killed after the worker function returns.
We can’t call the module just workqueue.c because there is already a built-in with that name: https://unix.stackexchange.com/questions/364956/how-can-insmod-fail-with-kernel-module-is-already-loaded-even-is-lsmod-does-not
Count from 0 to 9 every second infinitely many times by scheduling a new work item from a work item:
insmod /work_from_work.ko
Stop:
rmmod work_from_work
The sleep is done indirectly through: queue_delayed_work, which waits the specified time before scheduling the work.
Source: kernel_module/work_from_work.c
Let’s block the entire kernel! Yay:
./run -F 'dmesg -n 1;insmod /schedule.ko schedule=0'
Outcome: the system hangs, the only way out is to kill the VM.
Source: kernel_module/schedule.c
kthreads can only be interrupted if they call schedule(), and the schedule=0 kernel module parameter turns those calls off.
Sleep functions like usleep_range also end up calling schedule.
If we allow schedule() to be called, then the system becomes responsive:
./run -F 'dmesg -n 1;insmod /schedule.ko schedule=1'
and we can observe the counting with:
dmesg -w
The system also responds if we add another core:
./run -c 2 -F 'dmesg -n 1;insmod /schedule.ko schedule=0'
Wait queues are a way to make a thread sleep until an event happens on the queue:
insmod /wait_queue.ko
Dmesg output:
0 0
1 0
2 0
# Wait one second.
0 1
1 1
2 1
# Wait one second.
0 2
1 2
2 2
...
Stop the count:
rmmod wait_queue
Source: kernel_module/wait_queue.c
This example launches three threads:

- one thread generates events every second with wake_up
- the other two threads wait for that with wait_event, and print a dmesg when it happens.

  The wait_event macro works a bit like:

  while (!cond)
      sleep_until_event
Count from 0 to 9 infinitely many times in 1 second intervals using timers:
insmod /timer.ko
Stop counting:
rmmod timer
Source: kernel_module/timer.c
Timers are callbacks that run when an interrupt happens, from the interrupt context itself.
Therefore they produce more accurate timing than thread scheduling, which is more complex, but you can’t do too much work inside of them.
Bibliography:
Brute force monitor every shared interrupt that will accept us:
./run -F 'insmod /irq.ko' -x
Source: kernel_module/irq.c.
Now try the following:
- press a keyboard key and then release it after a few seconds
- press a mouse key, and release it after a few seconds
- move the mouse around
Outcome: dmesg shows which IRQ was fired for each action through messages of type:
handler irq = 1 dev = 250
dev is the character device for the module and never changes, as can be confirmed by:
grep lkmc_irq /proc/devices
The IRQs that we observe are:
- 1 for keyboard press and release.

  If you hold the key down for a while, it starts firing at a constant rate. So this happens at the hardware level!
- 12 for mouse actions

This only works for IRQs whose other handlers are registered as IRQF_SHARED.
We can see which ones are those, either via dmesg messages of type:
genirq: Flags mismatch irq 0. 00000080 (myirqhandler0) vs. 00015a00 (timer)
request_irq irq = 0 ret = -16
request_irq irq = 1 ret = 0
which indicate that 0 is not, but 1 is, or with:
cat /proc/interrupts
which shows:
 0:         31   IO-APIC   2-edge      timer
 1:          9   IO-APIC   1-edge      i8042, myirqhandler0

so only IRQ 1 has myirqhandler0 attached, but not IRQ 0.
The QEMU monitor also has some interrupt statistics for x86_64:
./qemumonitor 'info irq'
TODO: properly understand how each IRQ maps to what number.
The Linux kernel v4.16 mainline also has a dummy-irq module at drivers/misc/dummy-irq.c for monitoring a single IRQ.
We build it by default with:
CONFIG_DUMMY_IRQ=m
And then you can do
./run -x
and in guest:
modprobe dummy-irq irq=1
Outcome: when you click a key on the keyboard, dmesg shows:
dummy-irq: interrupt occurred on IRQ 1
However, this module is intended to fire only once as can be seen from its source:
static int count = 0;
if (count == 0) {
printk(KERN_INFO "dummy-irq: interrupt occurred on IRQ %d\n",
irq);
count++;
}
and furthermore interrupts 1 and 12 fire immediately. TODO why, were they somehow pending?
So to see something interesting, you need to monitor an interrupt that is rarer than the keyboard, e.g. platform_device.
In the guest on Graphic mode:
watch -n 1 cat /proc/interrupts
Then see how clicking the mouse and keyboard affect the interrupt counts.
This confirms that:
-
1: keyboard
-
12: mouse click and drags
The module also shows which handlers are registered for each IRQ, as we have observed at irq.ko.
When in text mode, we can also observe interrupt line 4 with handler ttyS0 increase continuously as IO goes through the UART.
Convert a string to an integer:
/kstrto.sh
echo $?
Outcome: the test passes:
0
Sources:
Convert a virtual address to physical:
insmod /virt_to_phys.ko
cat /sys/kernel/debug/lkmc_virt_to_phys
Source: kernel_module/virt_to_phys.c
Sample output:
*kmalloc_ptr = 0x12345678
kmalloc_ptr = ffff88000e169ae8
virt_to_phys(kmalloc_ptr) = 0xe169ae8
static_var = 0x12345678
&static_var = ffffffffc0002308
virt_to_phys(&static_var) = 0x40002308
We can confirm that the kmalloc_ptr translation worked with:
./qemumonitor 'xp 0xe169ae8'
which reads four bytes from a given physical address, and gives the expected:
000000000e169ae8: 0x12345678
TODO it only works for kmalloc however, for the static variable:
./qemumonitor 'xp 0x40002308'
it gave a wrong value of 00000000.
Bibliography:
Only tested in x86_64.
The Linux kernel exposes physical addresses to userland through:
- /proc/<pid>/maps
- /proc/<pid>/pagemap
- /dev/mem
In this section we will play with them.
First get a virtual address to play with:
/virt_to_phys_test.out &
Sample output:
vaddr 0x600800 pid 110
The program:
- allocates a volatile variable and sets its value to 0x12345678
- prints the virtual address of the variable, and the program PID
- runs a while loop until the value of the variable gets mysteriously changed somehow, e.g. by nasty tinkerers like us
Then, translate the virtual address to physical using /proc/<pid>/maps and /proc/<pid>/pagemap:
/virt_to_phys_user.out 110 0x600800
Sample output physical address:
0x7c7b800
Now we can verify that virt_to_phys_user.out gave the correct physical address in the following ways:
Bibliography:
The xp QEMU monitor command reads memory at a given physical address.
First launch virt_to_phys_user.out as described at Userland physical address experiments.
On a second terminal, use QEMU to read the physical address:
./qemumonitor 'xp 0x7c7b800'
Output:
0000000007c7b800: 0x12345678
Yes!!! We read the correct value from the physical address.
We could not however find a way to write to memory from the QEMU monitor, boring.
/dev/mem exposes access to physical addresses, and we use it through the convenient devmem BusyBox utility.
First launch virt_to_phys_user.out as described at Userland physical address experiments.
Next, read from the physical address:
devmem 0x7c7b800
Possible output:
Memory mapped at address 0x7ff7dbe01000.
Value at address 0X7C7B800 (0x7ff7dbe01800): 0x12345678
which shows that the physical memory contains the expected value 0x12345678.
0x7ff7dbe01000 is a new virtual address that devmem maps to the physical address to be able to read from it.
Modify the physical memory:
devmem 0x7c7b800 w 0x9abcdef0
After one second, we see on the screen:
i 9abcdef0
[1]+  Done                       /virt_to_phys_test.out
so the value changed, and the while loop exited!
This example requires:
- CONFIG_STRICT_DEVMEM=n, otherwise devmem fails with:

  devmem: mmap: Operation not permitted
- the nopat kernel parameter

which we set by default.
Dump the physical address of all pages mapped to a given process using /proc/<pid>/maps and /proc/<pid>/pagemap.
First launch virt_to_phys_user.out as described at Userland physical address experiments. Suppose that the output was:
# /virt_to_phys_test.out &
vaddr 0x601048 pid 63
# /virt_to_phys_user.out 63 0x601048
0x1a61048
Now obtain the page map for the process:
/pagemap_dump.out 63
Sample output excerpt:
vaddr         pfn   soft-dirty file/shared swapped present library
400000        1ede  0          1           0       1       /virt_to_phys_test.out
600000        1a6f  0          0           0       1       /virt_to_phys_test.out
601000        1a61  0          0           0       1       /virt_to_phys_test.out
602000        2208  0          0           0       1       [heap]
603000        220b  0          0           0       1       [heap]
7ffff78ec000  1fd4  0          1           0       1       /lib/libuClibc-1.0.30.so
Adapted from: https://github.com/dwks/pagemap/blob/8a25747bc79d6080c8b94eac80807a4dceeda57a/pagemap2.c
Meaning of the flags:

- vaddr: first virtual address of a page that belongs to the process. Notably:

  ./runtc readelf -l out/x86_64/buildroot/build/kernel_module-1.0/user/virt_to_phys_test.out

  contains:

  Type           Offset             VirtAddr           PhysAddr
                 FileSiz            MemSiz              Flags  Align
  ...
  LOAD           0x0000000000000000 0x0000000000400000 0x0000000000400000
                 0x000000000000075c 0x000000000000075c  R E    0x200000
  LOAD           0x0000000000000e98 0x0000000000600e98 0x0000000000600e98
                 0x00000000000001b4 0x0000000000000218  RW     0x200000
  Section to Segment mapping:
   Segment Sections...
  ...
   02     .interp .hash .dynsym .dynstr .rela.plt .init .plt .text .fini .rodata .eh_frame_hdr .eh_frame
   03     .ctors .dtors .jcr .dynamic .got.plt .data .bss

  from which we deduce that:

  - 400000 is the text segment
  - 600000 is the data segment
- pfn: add three zeroes to it, and you have the physical address. Three zeroes is 12 bits, which is 4kB, which is the size of a page.

  For example, the virtual address 0x601000 has a pfn of 0x1a61, which means that its physical address is 0x1a61000.

  This is consistent with what virt_to_phys_user.out told us: the virtual address 0x601048 has physical address 0x1a61048. 048 corresponds to the three last zeroes, and is the offset within the page.

  Also, this value falls inside 0x601000, which as previously analyzed is the data section, which is the normal location for global variables such as ours.
- soft-dirty: TODO
- file/shared: TODO. 1 seems to indicate that the page can be shared across processes, possibly for read-only pages? E.g. the text segment has 1, but the data has 0.
- swapped: TODO swapped to disk?
- present: TODO vs swapped?
- library: which executable owns that page
This program works in two steps:

- parse the human readable lines from /proc/<pid>/maps. This file contains lines of form:

  7ffff7b6d000-7ffff7bdd000 r-xp 00000000 fe:00 658 /lib/libuClibc-1.0.22.so

  which tells us that:

  - 7ffff7b6d000-7ffff7bdd000 is a virtual address range that belongs to the process, possibly containing multiple pages.
  - /lib/libuClibc-1.0.22.so is the name of the library that owns that memory
- loop over each page of each address range, and ask /proc/<pid>/pagemap for more information about that page, including the physical address
Good overviews:
-
http://www.brendangregg.com/blog/2015-07-08/choosing-a-linux-tracer.html by Brendan Greg, AKA the master of tracing. Also: https://github.com/brendangregg/perf-tools
I hope to have examples of all methods some day, since I’m obsessed with visibility.
Logs proc events such as process creation to a netlink socket.
We then have a userland program that listens to the events and prints them out:
# /proc_events.out &
# set mcast listen ok
# sleep 2 & sleep 1
fork: parent tid=48 pid=48 -> child tid=79 pid=79
fork: parent tid=48 pid=48 -> child tid=80 pid=80
exec: tid=80 pid=80
exec: tid=79 pid=79
#
exit: tid=80 pid=80 exit_code=0
exit: tid=79 pid=79 exit_code=0
# echo a
a
#
Source: kernel_module/user/proc_events.c
TODO: why exit: tid=79 shows after exit: tid=80?
Note how echo a is a Bash built-in, and therefore does not spawn a new process.
TODO: why does this produce no output?
/proc_events.out >f &
TODO can you get process data such as UID and process arguments? It seems not since exec_proc_event contains so little data: https://github.com/torvalds/linux/blob/v4.16/include/uapi/linux/cn_proc.h#L80 We could try to immediately read it from /proc, but there is a risk that the process finished and another one took its PID, so it wouldn’t be reliable.
0111ca406bdfa6fd65a2605d353583b4c4051781 was failing with:
>>> kernel_module 1.0 Building
/usr/bin/make -j8 -C '/linux-kernel-module-cheat//out/aarch64/buildroot/build/kernel_module-1.0/user' BR2_PACKAGE_OPENBLAS="" CC="/linux-kernel-module-cheat//out/aarch64/buildroot/host/bin/aarch64-buildroot-linux-uclibc-gcc" LD="/linux-kernel-module-cheat//out/aarch64/buildroot/host/bin/aarch64-buildroot-linux-uclibc-ld"
/linux-kernel-module-cheat//out/aarch64/buildroot/host/bin/aarch64-buildroot-linux-uclibc-gcc -ggdb3 -fopenmp -O0 -std=c99 -Wall -Werror -Wextra -o 'proc_events.out' 'proc_events.c'
In file included from /linux-kernel-module-cheat//out/aarch64/buildroot/host/aarch64-buildroot-linux-uclibc/sysroot/usr/include/signal.h:329:0,
from proc_events.c:12:
/linux-kernel-module-cheat//out/aarch64/buildroot/host/aarch64-buildroot-linux-uclibc/sysroot/usr/include/sys/ucontext.h:50:16: error: field ‘uc_mcontext’ has incomplete type
mcontext_t uc_mcontext;
^~~~~~~~~~~
so we commented it out.
Related threads:
If we try to naively update uclibc to 1.0.29 with buildroot_override, which contains the above mentioned patch, clean aarch64 test build fails with:
../utils/ldd.c: In function 'elf_find_dynamic':
../utils/ldd.c:238:12: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
return (void *)byteswap_to_host(dynp->d_un.d_val);
^
/tmp/user/20321/cciGScKB.o: In function `process_line_callback':
msgmerge.c:(.text+0x22): undefined reference to `escape'
/tmp/user/20321/cciGScKB.o: In function `process':
msgmerge.c:(.text+0xf6): undefined reference to `poparser_init'
msgmerge.c:(.text+0x11e): undefined reference to `poparser_feed_line'
msgmerge.c:(.text+0x128): undefined reference to `poparser_finish'
collect2: error: ld returned 1 exit status
Makefile.in:120: recipe for target '../utils/msgmerge.host' failed
make[2]: *** [../utils/msgmerge.host] Error 1
make[2]: *** Waiting for unfinished jobs....
/tmp/user/20321/ccF8V8jF.o: In function `process':
msgfmt.c:(.text+0xbf3): undefined reference to `poparser_init'
msgfmt.c:(.text+0xc1f): undefined reference to `poparser_feed_line'
msgfmt.c:(.text+0xc2b): undefined reference to `poparser_finish'
collect2: error: ld returned 1 exit status
Makefile.in:120: recipe for target '../utils/msgfmt.host' failed
make[2]: *** [../utils/msgfmt.host] Error 1
package/pkg-generic.mk:227: recipe for target '/data/git/linux-kernel-module-cheat/out/aarch64/buildroot/build/uclibc-custom/.stamp_built' failed
make[1]: *** [/data/git/linux-kernel-module-cheat/out/aarch64/buildroot/build/uclibc-custom/.stamp_built] Error 2
Makefile:79: recipe for target '_all' failed
make: *** [_all] Error 2
Buildroot master has already moved to uclibc 1.0.29 at f8546e836784c17aa26970f6345db9d515411700, but it is not yet in any tag… so I’m not tempted to update it yet just for this.
Trace a single function:
cd /sys/kernel/debug/tracing/
# Stop tracing.
echo 0 > tracing_on
# Clear previous trace.
echo > trace
# List the available tracers, and pick one.
cat available_tracers
echo function > current_tracer
# List all functions that can be traced
# cat available_filter_functions
# Choose one.
echo __kmalloc > set_ftrace_filter
# Confirm that only __kmalloc is enabled.
cat enabled_functions
echo 1 > tracing_on
# Latest events.
head trace
# Observe trace continuously, and drain seen events out.
cat trace_pipe &
Sample output:
# tracer: function
#
# entries-in-buffer/entries-written: 97/97 #P:1
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
head-228 [000] .... 825.534637: __kmalloc <-load_elf_phdrs
head-228 [000] .... 825.534692: __kmalloc <-load_elf_binary
head-228 [000] .... 825.534815: __kmalloc <-load_elf_phdrs
head-228 [000] .... 825.550917: __kmalloc <-__seq_open_private
head-228 [000] .... 825.550953: __kmalloc <-tracing_open
head-229 [000] .... 826.756585: __kmalloc <-load_elf_phdrs
head-229 [000] .... 826.756627: __kmalloc <-load_elf_binary
head-229 [000] .... 826.756719: __kmalloc <-load_elf_phdrs
head-229 [000] .... 826.773796: __kmalloc <-__seq_open_private
head-229 [000] .... 826.773835: __kmalloc <-tracing_open
head-230 [000] .... 827.174988: __kmalloc <-load_elf_phdrs
head-230 [000] .... 827.175046: __kmalloc <-load_elf_binary
head-230 [000] .... 827.175171: __kmalloc <-load_elf_phdrs
Trace all possible functions, and draw a call graph:
echo 1 > max_graph_depth
echo 1 > events/enable
echo function_graph > current_tracer
Sample output:
# CPU DURATION FUNCTION CALLS
# | | | | | | |
0) 2.173 us | } /* ntp_tick_length */
0) | timekeeping_update() {
0) 4.176 us | ntp_get_next_leap();
0) 5.016 us | update_vsyscall();
0) | raw_notifier_call_chain() {
0) 2.241 us | notifier_call_chain();
0) + 19.879 us | }
0) 3.144 us | update_fast_timekeeper();
0) 2.738 us | update_fast_timekeeper();
0) ! 117.147 us | }
0) | _raw_spin_unlock_irqrestore() {
0) 4.045 us | _raw_write_unlock_irqrestore();
0) + 22.066 us | }
0) ! 265.278 us | } /* update_wall_time */
TODO: what do + and ! mean?
Each enable under the events/ tree enables a certain set of functions: the higher up the tree the enable is, the more functions are enabled.
TODO: can you get function arguments? https://stackoverflow.com/questions/27608752/does-ftrace-allow-capture-of-system-call-arguments-to-the-linux-kernel-or-only
kprobes is an instrumentation mechanism that injects arbitrary code at a given address by patching in a trap instruction, much like GDB. Oh, the good old kernel. :-)
./build -C 'CONFIG_KPROBES=y'
Then on guest:
insmod /kprobe_example.ko
sleep 4 & sleep 4 &
Outcome: dmesg outputs on every fork:
<_do_fork> pre_handler: p->addr = 0x00000000e1360063, ip = ffffffff810531d1, flags = 0x246
<_do_fork> post_handler: p->addr = 0x00000000e1360063, flags = 0x246
<_do_fork> pre_handler: p->addr = 0x00000000e1360063, ip = ffffffff810531d1, flags = 0x246
<_do_fork> post_handler: p->addr = 0x00000000e1360063, flags = 0x246
Source: kernel_module/kprobe_example.c
TODO: it does not work if I try to immediately launch sleep, why?
insmod /kprobe_example.ko && sleep 4 & sleep 4 &
I don’t think your code can refer to the surrounding kernel code however: the only visible thing is the value of the registers.
You can then hack it up to read the stack and read argument values, but do you really want to?
There is also a kprobes + ftrace based mechanism with CONFIG_KPROBE_EVENTS=y which does read the memory for us based on format strings that indicate type… https://github.com/torvalds/linux/blob/v4.16/Documentation/trace/kprobetrace.txt Horrendous. Used by: https://github.com/brendangregg/perf-tools/blob/98d42a2a1493d2d1c651a5c396e015d4f082eb20/execsnoop
Bibliography:
Results (boot not excluded):
| Commit | Arch | Simulator | Instruction count |
|---|---|---|---|
| 7228f75ac74c896417fb8c5ba3d375a14ed4d36b | arm | QEMU | 680k |
| 7228f75ac74c896417fb8c5ba3d375a14ed4d36b | arm | gem5 AtomicSimpleCPU | 160M |
| 7228f75ac74c896417fb8c5ba3d375a14ed4d36b | arm | gem5 HPI | 155M |
| 7228f75ac74c896417fb8c5ba3d375a14ed4d36b | x86_64 | QEMU | 3M |
| 7228f75ac74c896417fb8c5ba3d375a14ed4d36b | x86_64 | gem5 AtomicSimpleCPU | 528M |
QEMU:
./trace-boot -a x86_64
sample output:
instruction count all: 1833863
entry address: 0x1000000
instruction count firmware: 20708
gem5:
./run -a aarch64 -g -E 'm5 exit'
# Or:
# ./run -a aarch64 -g -E 'm5 exit' -- --cpu-type=HPI --caches
./gem5-stat -a aarch64 sim_insts
Notes:

- 0x1000000 is the address where QEMU puts the Linux kernel with -kernel in x86.

  It can be found from:

  ./runtc readelf -e out/x86_64/buildroot/build/linux-*/vmlinux | grep Entry

  TODO confirm further. If I try to break there with:

  ./rungdb *0x1000000

  I have no corresponding source line. Also note that this is not actually the first instruction, since kernel messages such as "early console in extract_kernel" have already shown on screen at that point. This does not break at all:

  ./rungdb extract_kernel

  The entry address only appears once on every log I've seen so far, checked with grep 0x1000000 trace.txt.

  When we count the instructions that run before the kernel entry point, there are only about 100k instructions, which is insignificant compared to the kernel boot itself.

  TODO: -a arm and -a aarch64 do not count firmware instructions properly because the entry point address of the ELF file does not show up on the trace at all.
- We can also discount the instructions after init runs by using readelf to get the initial address of init. One easy way to do that now is to just run:

  ./rungdb-user kernel_module-1.0/user/poweroff.out main

  And get that from the traces, e.g. if the address is 4003a0, then we search:

  grep -n 4003a0 trace.txt

  I have observed a single match for that instruction, so it must be the init, and there were only 20k instructions after it, so the impact is negligible.
- TODO: disable networking. Is replacing init enough? CONFIG_NET=n did not significantly reduce instruction counts, so maybe replacing init is enough.
- gem5 simulates memory latencies. So I think that the CPU loops idle while waiting for memory, and counts will be higher.
Make it harder to get hacked and easier to notice that you were, at the cost of some (small?) runtime overhead.
Detects buffer overflows for us:
./build -C 'CONFIG_FORTIFY_SOURCE=y' -L fortify -k
./run -F 'insmod /strlen_overflow.ko' -L fortify
Possible dmesg output:
strlen_overflow: loading out-of-tree module taints kernel.
detected buffer overflow in strlen
------------[ cut here ]------------
followed by a trace.
You may not get this error because this depends on strlen overflowing at least until the next page: if a random \0 appears soon enough, it won’t blow up as desired.
TODO not always reproducible. Find a more reproducible failure. I could not observe it on:
insmod /memcpy_overflow.ko
Source: kernel_module/strlen_overflow.c
I once got UML running on a minimal Buildroot setup at: https://unix.stackexchange.com/questions/73203/how-to-create-rootfs-for-user-mode-linux-on-fedora-18/372207#372207
But in part because it is dying, I didn’t spend much effort to integrate it into this repo, although it would be a good fit in principle, since it is essentially a virtualization method.
Maybe some brave soul will send a pull request one day.
UIO is a kernel subsystem that allows certain types of driver operations to be done from userland.
This would be awesome to improve the debuggability and safety of kernel modules.
VFIO looks like a newer and better UIO replacement, but no examples of how to use it seem to exist: https://stackoverflow.com/questions/49309162/interfacing-with-qemu-edu-device-via-userspace-i-o-uio-linux-driver
TODO get something interesting working. I currently don’t understand the behaviour very well.
TODO how to ACK interrupts? How to ensure that every interrupt gets handled separately?
TODO how to write to registers. Currently using /dev/mem and lspci.
This example should handle interrupts from userland and print a message to stdout:
/uio_read.sh
TODO: what is the expected behaviour? I should have documented this when I wrote this stuff, and I’m that lazy right now that I’m in the middle of a refactor :-)
UIO interface in a nutshell:

- blocking read / poll: waits until interrupts
- write: calls the irqcontrol callback. Default: 0 or 1 to enable / disable interrupts.
- mmap: access device memory
Sources:
Bibliography:
- https://stackoverflow.com/questions/15286772/userspace-vs-kernel-space-driver
- https://01.org/linuxgraphics/gfx-docs/drm/driver-api/uio-howto.html
- https://stackoverflow.com/questions/7986260/linux-interrupt-handling-in-user-space
- https://yurovsky.github.io/2014/10/10/linux-uio-gpio-interrupt/
- https://github.com/bmartini/zynq-axis/blob/65a3a448fda1f0ea4977adfba899eb487201853d/dev/axis.c
- http://nairobi-embedded.org/uio_example.html that website has QEMU examples for everything as usual. The example has a kernel-side which creates the memory mappings and is used by the user.
- userland driver stability questions:
Requires Graphic mode.
You can also try those on the Ctrl-Alt-F3 of your Ubuntu host, but it is much more fun inside a VM!
Stop the cursor from blinking:
echo 0 > /sys/class/graphics/fbcon/cursor_blink
Rotate the console 90 degrees! https://askubuntu.com/questions/237963/how-do-i-rotate-my-display-when-not-using-an-x-server
echo 1 > /sys/class/graphics/fbcon/rotate
Relies on: CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y.
Documented under: Documentation/fb/.
TODO: font and keymap. Mentioned at: https://cmcenroe.me/2017/05/05/linux-console.html and I think can be done with BusyBox loadkmap and loadfont, we just have to understand their formats, related:
Requires Graphic mode.
Let’s have some fun.
I think most are implemented under:
drivers/tty
TODO find all.
Scroll up / down the terminal:
Shift-PgDown
Shift-PgUp
Or inside ./qemumonitor:
sendkey shift-pgup
sendkey shift-pgdown
Reboot guest:
Ctrl-Alt-Del
Enabled from our rootfs_overlay/etc/inittab:
::ctrlaltdel:/sbin/reboot
Under the hood, behaviour is controlled by the reboot syscall:
man 2 reboot
reboot calls can set either of these behaviours for Ctrl-Alt-Del:

- do a hard shutdown syscall. Set in uclibc C code with:

  reboot(RB_ENABLE_CAD)

  or from procfs with:

  echo 1 > /proc/sys/kernel/ctrl-alt-del
- send a SIGINT to the init process. This is what BusyBox' init does, and it then execs the string set in inittab.

  Set in uclibc C code with:

  reboot(RB_DISABLE_CAD)

  or from procfs with:

  echo 0 > /proc/sys/kernel/ctrl-alt-del
Minimal example:
./run -e 'init=/ctrl_alt_del.out' -x
When you hit Ctrl-Alt-Del in the guest, our tiny init handles a SIGINT sent by the kernel and outputs to stdout:
cad
To map between man 2 reboot and the uclibc RB_* magic constants see:
less out/x86_64/buildroot/build/uclibc-*/include/sys/reboot.h
The procfs mechanism is documented at:
less linux/Documentation/sysctl/kernel.txt
which says:
When the value in this file is 0, ctrl-alt-del is trapped and sent to the init(1) program to handle a graceful restart. When, however, the value is > 0, Linux's reaction to a Vulcan Nerve Pinch (tm) will be an immediate reboot, without even syncing its dirty buffers. Note: when a program (like dosemu) has the keyboard in 'raw' mode, the ctrl-alt-del is intercepted by the program before it ever reaches the kernel tty layer, and it's up to the program to decide what to do with it.
Bibliography:
We cannot test these actual shortcuts on QEMU since the host captures them at a lower level, but from:
./qemumonitor
we can for example crash the system with:
sendkey alt-sysrq-c
Same but boring because no magic key:
echo c > /proc/sysrq-trigger
Implemented in:
drivers/tty/sysrq.c
On your host, on modern systems that don’t have the SysRq key you can do:
Alt-PrtSc-space
which prints a message to dmesg of type:
sysrq: SysRq : HELP : loglevel(0-9) reboot(b) crash(c) terminate-all-tasks(e) memory-full-oom-kill(f) kill-all-tasks(i) thaw-filesystems(j) sak(k) show-backtrace-all-active-cpus(l) show-memory-usage(m) nice-all-RT-tasks(n) poweroff(o) show-registers(p) show-all-timers(q) unraw(r) sync(s) show-task-states(t) unmount(u) show-blocked-tasks(w) dump-ftrace-buffer(z)
Individual SysRq can be enabled or disabled with the bitmask:
/proc/sys/kernel/sysrq
The bitmask is documented at:
less linux/Documentation/admin-guide/sysrq.rst
Bibliography: https://en.wikipedia.org/wiki/Magic_SysRq_key
In order to play with TTYs, do this:
printf '
tty2::respawn:/sbin/getty -n -L -l /loginroot.sh tty2 0 vt100
tty3::respawn:-/bin/sh
tty4::respawn:/sbin/getty 0 tty4
tty63::respawn:-/bin/sh
::respawn:/sbin/getty -L ttyS0 0 vt100
::respawn:/sbin/getty -L ttyS1 0 vt100
::respawn:/sbin/getty -L ttyS2 0 vt100
# Leave one serial empty.
#::respawn:/sbin/getty -L ttyS3 0 vt100
' >> rootfs_overlay/etc/inittab
./build
./run -x -- \
  -serial telnet::1235,server,nowait \
  -serial vc:800x600 \
  -serial telnet::1236,server,nowait \
;
and on a second shell:
telnet localhost 1235
We don’t add more TTYs by default because it would spawn more processes, even if we use askfirst instead of respawn.
On the GUI, switch TTYs with:
- Alt-Left or Alt-Right: go to the previous / next populated /dev/ttyN TTY. Skips over empty TTYs.
- Alt-Fn: go to the nth TTY. If it is not populated, don't go there.
- chvt <n>: go to the n-th virtual TTY, even if it is empty: https://superuser.com/questions/33065/console-commands-to-change-virtual-ttys-in-linux-and-openbsd
You can also test this on most hosts such as Ubuntu 18.04, except that when in the GUI, you must use Ctrl-Alt-Fx to switch to another terminal.
Next, we also have the following shells running on the serial ports, hit enter to activate them:
- /dev/ttyS0: first shell, the one that was used to run QEMU, corresponds to QEMU's -serial mon:stdio.

  It would also work if we used -serial stdio, but:

  - Ctrl-C would kill QEMU instead of going to the guest
  - Ctrl-A C wouldn't open the QEMU console there
- /dev/ttyS1: second shell running telnet
- /dev/ttyS2: go on the GUI and enter Ctrl-Alt-2, corresponds to QEMU's -serial vc. Go back to the main console with Ctrl-Alt-1, although we cannot change between terminals from there.
Each populated TTY contains a "shell":
- -/bin/sh: goes directly into an sh without a login prompt. Don't forget the dash -: https://askubuntu.com/questions/902998/how-to-check-which-tty-am-i-using

  TODO: does not work for the ttyS* terminals. Why?
- /sbin/getty: asks for a password, and then gives you an sh.

  We can overcome the password prompt with the -l /loginroot.sh technique explained at: https://askubuntu.com/questions/902998/how-to-check-which-tty-am-i-using but I don't see any advantage over -/bin/sh currently.
Identify the current TTY with the command:
tty
Bibliography:
-
https://unix.stackexchange.com/questions/270272/how-to-get-the-tty-in-which-bash-is-running/270372
-
https://unix.stackexchange.com/questions/187319/how-to-get-the-real-name-of-the-controlling-terminal
-
https://unix.stackexchange.com/questions/77796/how-to-get-the-current-terminal-name
-
https://askubuntu.com/questions/902998/how-to-check-which-tty-am-i-using
This outputs:
- /dev/console for the initial GUI terminal. But I think it is the same as /dev/tty1, because if I try to do:

  tty1::respawn:-/bin/sh

  it makes the terminal go crazy, as if multiple processes are randomly eating up the characters.
- /dev/ttyN for the other graphic TTYs. Note that there are only 63 available ones, from /dev/tty1 to /dev/tty63 (/dev/tty0 is the current one): https://superuser.com/questions/449781/why-is-there-so-many-linux-dev-tty. I think this is determined by:

  #define MAX_NR_CONSOLES 63

  in linux/include/uapi/linux/vt.h.
- /dev/ttySN for the text shells.

  These are serial ports, see this to understand what those represent physically: https://unix.stackexchange.com/questions/307390/what-is-the-difference-between-ttys0-ttyusb0-and-ttyama0-in-linux/367882#367882

  There are only 4 serial ports, I think this is determined by QEMU. TODO check.
Get the TTY in bulk for all processes:
/psa.sh
Source: rootfs_overlay/psa.sh.
The TTY appears under the TT column, which is enabled by -o tty. This shows the TTY device number, e.g.:
4,1
and we can then confirm it with:
ls -l /dev/tty1
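That 4,1 pair is a major,minor device number. As a toy sketch of the mapping (decode_tty is a hypothetical helper, not something this repo ships; it assumes the classic convention that major 4 with minor below 64 means a /dev/ttyN virtual terminal):

```shell
# Hypothetical helper: turn the "major,minor" pair from the ps TT column
# into a /dev path, assuming major 4 / minor < 64 maps to /dev/ttyN.
decode_tty() {
  major="${1%,*}"
  minor="${1#*,}"
  if [ "$major" -eq 4 ] && [ "$minor" -lt 64 ]; then
    echo "/dev/tty$minor"
  else
    echo "unknown"
  fi
}
decode_tty 4,1   # prints /dev/tty1
```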
Next try:
insmod /kthread.ko
and switch between virtual terminals, to understand that the dmesg goes to whatever current virtual terminal you are on, but not the others, and not to the serial terminals.
Bibliography:
TODO: how to place an sh directly on a TTY as well without getty?
If I try the exact same command that the inittab is doing from a regular shell after boot:
/sbin/getty 0 tty1
it fails with:
getty: setsid: Operation not permitted
The following however works:
./run -E 'getty 0 tty1 & getty 0 tty2 & getty 0 tty3 & sleep 99999999' -x
presumably because it is being called from init directly?
Outcome: Alt-Right cycles between three TTYs, tty1 being the default one that appears under the boot messages.
man 2 setsid says that there is only one failure possibility:
EPERM The process group ID of any process equals the PID of the calling process. Thus, in particular, setsid() fails if the calling process is already a process group leader.
We can get some visibility into it to try and solve the problem with:
/psa.sh
Take the command described at TTY and try adding the following:
- -e 'console=tty7': boot messages still show on /dev/tty1 (TODO: how to change that?), but we don't get a shell at the end of boot there. Instead, the shell appears on /dev/tty7.
- -e 'console=tty2': like /dev/tty7, but /dev/tty2 is broken, because we have two shells there:
  - one due to the ::respawn:-/bin/sh entry, which uses whatever console points to
  - another one due to the tty2::respawn:/sbin/getty entry we added
- -e 'console=ttyS0': much like tty2, but messages show only on the serial port, and the terminal is broken due to having multiple shells on it
- -e 'console=tty1 console=ttyS0': boot messages show on both tty1 and ttyS0, but only ttyS0 gets a shell because it came last
If you run in Graphic mode, then you get a Penguin image for every core above the console! https://askubuntu.com/questions/80938/is-it-possible-to-get-the-tux-logo-on-the-text-based-boot
This is due to the CONFIG_LOGO=y option which we enable by default.
reset on the terminal then kills the poor penguins.
When CONFIG_LOGO=y is set, the logo can be disabled at boot with:
./run -e 'logo.nologo'
Looks like a recompile is needed to modify the image…
DRM / DRI is the new interface that supersedes fbdev:
./build -B 'BR2_PACKAGE_LIBDRM=y' -k
./run -F '/libdrm_modeset.out' -x
Outcome: for a few seconds, the screen that contains the terminal gets taken over by changing colors of the rainbow.
TODO not working for aarch64, it takes over the screen for a few seconds and the kernel messages disappear, but the screen stays black all the time.
./build -B 'BR2_PACKAGE_LIBDRM=y' -k
./run -F '/libdrm_modeset.out' -x
kmscube however worked, which means that it must be a bug with this demo?
We set CONFIG_DRM=y on our default kernel configuration, and it creates one device file for each display:
# ls -l /dev/dri
total 0
crw-------    1 root     root      226,   0 May 28 09:41 card0
# grep 226 /proc/devices
226 drm
# ls /sys/module/drm /sys/module/drm_kms_helper/
Try creating new displays:
./run -aA -x -- -device virtio-gpu-pci
to see multiple /dev/dri/cardN, and then use a different display with:
./run -F '/libdrm_modeset.out' -x
Bibliography:
Tested on: 93e383902ebcc03d8a7ac0d65961c0e62af9612b
./build -b br2/kmscube
Outcome: a colored spinning cube coded in OpenGL + EGL takes over your display and spins forever: https://www.youtube.com/watch?v=CqgJMgfxjsk
It is a bit amusing to see OpenGL running outside of a window manager window like that: https://stackoverflow.com/questions/3804065/using-opengl-without-a-window-manager-in-linux/50669152#50669152
TODO: it is very slow, about 1FPS. I tried Buildroot master ad684c20d146b220dd04a85dbf2533c69ec8ee52 with:
make qemu_x86_64_defconfig
printf "
BR2_CCACHE=y
BR2_PACKAGE_HOST_QEMU=y
BR2_PACKAGE_HOST_QEMU_LINUX_USER_MODE=n
BR2_PACKAGE_HOST_QEMU_SYSTEM_MODE=y
BR2_PACKAGE_HOST_QEMU_VDE2=y
BR2_PACKAGE_KMSCUBE=y
BR2_PACKAGE_MESA3D=y
BR2_PACKAGE_MESA3D_DRI_DRIVER_SWRAST=y
BR2_PACKAGE_MESA3D_OPENGL_EGL=y
BR2_PACKAGE_MESA3D_OPENGL_ES=y
BR2_TOOLCHAIN_BUILDROOT_CXX=y
" >> .config
and the FPS was much better, I estimate something like 15FPS.
On Ubuntu 18.04 with NVIDIA proprietary drivers:
sudo apt-get install kmscube
kmscube
fails with:
drmModeGetResources failed: Invalid argument
failed to initialize legacy DRM
See also: robclark/kmscube#12 and https://stackoverflow.com/questions/26920835/can-egl-application-run-in-console-mode/26921287#26921287
Tested on: 2903771275372ccfecc2b025edbb0d04c4016930
TODO get working.
Implements a console for DRM.
The upstream project seems dead with last commit in 2014: https://www.freedesktop.org/wiki/Software/kmscon/
Build failed in Ubuntu 18.04 with: dvdhrm/kmscon#131 but this fork compiled but didn’t run on host: Aetf/kmscon#2 (comment)
Haven't tested the fork on QEMU: too much insanity.
TODO get working.
Looks like a more raw alternative to libdrm:
./build -B 'BR2_PACKAGE_LIBDRI2=y'
wget -O kernel_module/user/dri2test.c https://raw.githubusercontent.com/robclark/libdri2/master/test/dri2test.c
./build -k
but then I noticed that that example requires multiple files, and I don’t feel like integrating it into our build.
When I build it on Ubuntu 18.04 host, it does not generate any executable, so I’m confused.
Linux Test Project
C userland test suite.
Buildroot already has a package, so it is trivial to build it:
./build -B 'BR2_PACKAGE_LTP_TESTSUITE=y'
Then try it out with:
cd /usr/lib/ltp-testsuite/testcases
./bin/write01
There is a main executable execltp to run everything, but it depends on Python, so let’s just run them manually.
TODO a large chunk of tests, the Open POSIX test suite, is disabled with a comment on Buildroot master saying build failed: https://github.com/buildroot/buildroot/blob/3f37dd7c3b5eb25a41edc6f72ba73e5a21b07e9b/package/ltp-testsuite/ltp-testsuite.mk#L13 However, both tickets mentioned there were closed, so we should try it out and patch Buildroot if it works now.
POSIX userland stress. Two versions:
./build -B 'BR2_PACKAGE_STRESS=y'
./build -B 'BR2_PACKAGE_STRESS_NG=y'
Websites:
Likely the NG one is best, but it requires BR2_TOOLCHAIN_USES_GLIBC=y which we don’t have currently because we use uclibc… arghhhh.
stress usage:
stress --help
stress -c 16 &
ps
and notice how 16 worker processes were created in addition to the parent stress process.
It just runs forever, so kill it when you get tired:
kill %1
stress -c 1 -t 1 makes gem5 unresponsive for a very long time.
Some QEMU specific features to play with and limitations to cry over.
QEMU allows us to take snapshots at any time through the monitor.
You can then restore CPU, memory and disk state back at any time.
qcow2 filesystems must be used for that to work.
To test it out, log into the VM with:
./run -F 'umount /mnt/9p /mnt/out'
and run:
/count.sh
On another shell, take a snapshot:
./qemumonitor savevm my_snap_id
The counting continues.
Restore the snapshot:
./qemumonitor loadvm my_snap_id
and the counting goes back to where we saved. This shows that CPU and memory states were reverted.
The umount is needed because snapshotting conflicts with 9P, which we felt is a more valuable default. If you forget to unmount, the following error appears on the QEMU monitor:
Migration is disabled when VirtFS export path '/linux-kernel-module-cheat/out/x86_64/buildroot/build' is mounted in the guest using mount_tag 'host_out'
We can also verify that the disk state is reverted. Guest:
echo 0 >f
Monitor:
./qemumonitor savevm my_snap_id
Guest:
echo 1 >f
Monitor:
./qemumonitor loadvm my_snap_id
Guest:
cat f
And the output is 0.
Our setup does not allow for snapshotting while using initrd.
Bibliography: https://stackoverflow.com/questions/40227651/does-qemu-emulator-have-checkpoint-function/48724371#48724371
Snapshots are stored inside the .qcow2 images themselves.
They can be observed with:
./out/x86_64/buildroot/host/bin/qemu-img info out/x86_64/buildroot/images/rootfs.ext2.qcow2
which after savevm my_snap_id and savevm asdf contains an output of type:
image: out/x86_64/buildroot/images/rootfs.ext2.qcow2
file format: qcow2
virtual size: 512M (536870912 bytes)
disk size: 180M
cluster_size: 65536
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 my_snap_id 47M 2018-04-27 21:17:50 00:00:15.251
2 asdf 47M 2018-04-27 21:20:39 00:00:18.583
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
As a consequence:
- it is possible to restore snapshots across boots, since they stay on the same image the entire time
- it is not possible to use snapshots with initrd in our setup, since we don't pass -drive at all when initrd is enabled
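As an aside, the tags in a snapshot table like the one shown earlier can be scraped with a quick awk sketch (the listing text is hard-coded here for illustration; in real use you would pipe the qemu-img info output into it):

```shell
# Pull the TAG column out of a qemu-img style snapshot table:
# skip the header line, print the second field of each row.
list='ID TAG VM SIZE DATE VM CLOCK
1 my_snap_id 47M 2018-04-27 21:17:50 00:00:15.251
2 asdf 47M 2018-04-27 21:20:39 00:00:18.583'
echo "$list" | awk 'NR > 1 { print $2 }'
```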
This section documents:
- how to interact with peripheral hardware device models through device drivers
- how to write your own hardware device models for our emulators, see also: https://stackoverflow.com/questions/28315265/how-to-add-a-new-device-in-qemu-source-code
For the more complex interfaces, we focus on simplified educational devices, either:
- present in the QEMU upstream:
- added in our fork of QEMU:
Only tested in x86.
PCI driver for our minimal pci_min.c QEMU fork device:
insmod /pci_min.ko
Source:
- kernel module: kernel_module/pci_min.c
- QEMU device: https://github.com/cirosantilli/qemu/blob/lkmc/hw/misc/lkmc_pci_min.c
Outcome:
<4>[ 10.608241] pci_min: loading out-of-tree module taints kernel.
<6>[ 10.609935] probe
<6>[ 10.651881] dev->irq = 11
lkmc_pci_min mmio_write addr = 0 val = 12345678 size = 4
<6>[ 10.668515] irq_handler irq = 11 dev = 251
lkmc_pci_min mmio_write addr = 4 val = 0 size = 4
What happened:
- right at probe time, we write to a register
- our hardware model is coded such that it generates an interrupt when written to
- the Linux kernel interrupt handler writes to another register, which tells the hardware to stop sending interrupts
Kernel messages and the printfs from inside QEMU are shown interleaved; to tell them apart more clearly, run in Graphic mode instead.
Works because we add to our default QEMU CLI:
-device lkmc_pci_min
Probe already does a MMIO write, which generates an IRQ and tests everything.
Small upstream educational PCI device:
/qemu_edu.sh
This tests a lot of features of the edu device, to understand the results, compare the inputs with the documentation of the hardware: https://github.com/qemu/qemu/blob/v2.12.0/docs/specs/edu.txt
Sources:
- kernel module: kernel_module/qemu_edu.c
- QEMU device: https://github.com/qemu/qemu/blob/v2.12.0/hw/misc/edu.c
- test script: rootfs_overlay/qemu_edu.sh
Works because we add to our default QEMU CLI:
-device edu
This example uses:
- the QEMU edu educational device, which is a minimal educational in-tree PCI example
- our /pci.ko kernel module, which exercises the edu hardware

I've contacted the awesome original author of edu, Jiri Slaby, and he told me there is no official kernel module example because this was created for a kernel module university course that he gives, and he didn't want to give away the answers. I don't agree with that philosophy, so students, cheat away with this repo and go make startups instead.
TODO exercise DMA on the kernel module. The edu hardware model has that feature:
In this section we will try to interact with PCI devices directly from userland without kernel modules.
First identify the PCI device with:
lspci
In our case for example, we see:
00:06.0 Unclassified device [00ff]: Device 1234:11e8 (rev 10)
00:07.0 Unclassified device [00ff]: Device 1234:11e9
which we identify as being edu and pci_min respectively by the magic numbers: 1234:11e?
Alternatively, we can also use the QEMU monitor:
./qemumonitor info qtree
which gives:
dev: lkmc_pci_min, id ""
addr = 07.0
romfile = ""
rombar = 1 (0x1)
multifunction = false
command_serr_enable = true
x-pcie-lnksta-dllla = true
x-pcie-extcap-init = true
class Class 00ff, addr 00:07.0, pci id 1234:11e9 (sub 1af4:1100)
bar 0: mem at 0xfeb54000 [0xfeb54007]
dev: edu, id ""
addr = 06.0
romfile = ""
rombar = 1 (0x1)
multifunction = false
command_serr_enable = true
x-pcie-lnksta-dllla = true
x-pcie-extcap-init = true
class Class 00ff, addr 00:06.0, pci id 1234:11e8 (sub 1af4:1100)
bar 0: mem at 0xfea00000 [0xfeafffff]
Read the configuration registers as binary:
hexdump /sys/bus/pci/devices/0000:00:06.0/config
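The very start of that dump is easy to decode by hand: the first two 16-bit little-endian words of PCI config space are the vendor id and the device id. A toy sketch with the edu device's bytes hard-coded (the byte values follow from its 1234:11e8 id above):

```shell
# First 4 bytes of edu's config space as raw little-endian bytes:
# vendor id 0x1234 is stored as "34 12", device id 0x11e8 as "e8 11".
set -- 34 12 e8 11
echo "vendor=$2$1 device=$4$3"   # prints: vendor=1234 device=11e8
```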
Get nice human readable names and offsets of the registers and some enums:
setpci --dumpregs
Get the value of a given config register from its human readable name, with either the bus or the device id:
setpci -s 0000:00:06.0 BASE_ADDRESS_0
setpci -d 1234:11e9 BASE_ADDRESS_0
Note however that BASE_ADDRESS_0 also appears when you do:
lspci -v
as:
Memory at feb54000
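Note that the raw register value is not the address by itself: for a memory BAR, the low 4 bits are flag bits (type, width, prefetchable) and must be masked off. A sketch of the decoding, with a made-up raw value for illustration:

```shell
# Decode a raw 32-bit memory BAR value: the base address is the value
# with the low 4 flag bits cleared (IO BARs would mask only 2 bits).
raw=0xfeb54008   # hypothetical raw register value with flag bits set
printf '0x%x\n' $(( raw & ~0xf ))   # prints 0xfeb54000
```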
Then you can try messing with that address with [dev-mem]:
devmem 0xfeb54000 w 0x12345678
which writes to the first register of our pci_min device.
The device then fires an interrupt at irq 11, which is unhandled, which leads the kernel to say you are a bad boy:
lkmc_pci_min mmio_write addr = 0 val = 12345678 size = 4
<5>[ 1064.042435] random: crng init done
<3>[ 1065.567742] irq 11: nobody cared (try booting with the "irqpoll" option)
followed by a trace.
Next, also try using our irq.ko IRQ monitoring module before triggering the interrupt:
insmod /irq.ko
devmem 0xfeb54000 w 0x12345678
Our kernel module handles the interrupt, but does not acknowledge it like our proper pci_min kernel module, and so it keeps firing, which leads to infinitely many messages being printed:
handler irq = 11 dev = 251
There are two versions of setpci and lspci:
- a simple one from BusyBox
- a more complete one from pciutils, which Buildroot has a package for, and which is the default on an Ubuntu 18.04 host. This is the one we enable by default.
The PCI standard is non-free, obviously like everything in low level: https://pcisig.com/specifications but Google gives several illegal PDF hits :-)
And of course, the best documentation available is: http://wiki.osdev.org/PCI
Like every other hardware, we could interact with PCI on x86 using only IO instructions and memory operations.
But PCI is a complex communication protocol that the Linux kernel implements beautifully for us, so let’s use the kernel API.
Bibliography:
- edu device source and spec in QEMU tree:
- http://www.zarb.org/~trem/kernel/pci/pci-driver.c inb outb runnable example (no device)
- LDD3 PCI chapter
- another QEMU device + module, but using a custom QEMU device:
- https://is.muni.cz/el/1433/podzim2016/PB173/um/65218991/ course given by the creator of the edu device. In Czech, and only describes the API
lspci -k shows something like:
00:04.0 Class 00ff: 1234:11e8 lkmc_pci
Meaning of the first numbers:
<8:bus>:<5:device>.<3:function>
Often abbreviated to BDF.
- bus: groups PCI slots
- device: maps to one slot
- function: https://stackoverflow.com/questions/19223394/what-is-the-function-number-in-pci/44735372#44735372
Sometimes a fourth number is also added, e.g.:
0000:00:04.0
TODO is that the domain?
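Parsing a BDF string is simple shell string surgery; a small sketch following the convention above:

```shell
# Split a bus:device.function string into its parts.
bdf=00:04.0
bus="${bdf%%:*}"      # everything before the first ':'
devfn="${bdf#*:}"     # everything after it: "04.0"
dev="${devfn%.*}"     # device number
func="${devfn#*.}"    # function number
echo "bus=$bus device=$dev function=$func"
# prints: bus=00 device=04 function=0
```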
Class: pure magic: https://www-s.acm.illinois.edu/sigops/2007/roll_your_own/7.c.1.html TODO: does it have any side effects? Set in the edu device at:
k->class_id = PCI_CLASS_OTHERS
Each PCI device has 6 BAR IOs (base address register) as per the PCI spec.
Each BAR corresponds to an address range that can be used to communicate with the PCI.
Each BAR is of one of two types:
- IORESOURCE_IO: must be accessed with inX and outX
- IORESOURCE_MEM: must be accessed with ioreadX and iowriteX. This is the saner method apparently, and what the edu device uses.
The length of each region is defined by the hardware, and communicated to software via the configuration registers.
The Linux kernel automatically parses the 64 bytes of standardized configuration registers for us.
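For instance, the edu BAR shown in the info qtree output earlier spans 0xfea00000 to 0xfeafffff inclusive; the length follows directly from that range, matching the 1 << 20 that the edu device registers:

```shell
# Length of a BAR region from its inclusive start/end addresses.
start=0xfea00000
end=0xfeafffff
echo $(( end - start + 1 ))   # prints 1048576, i.e. 1 MiB = 1 << 20
```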
QEMU devices register those regions with:
memory_region_init_io(&edu->mmio, OBJECT(edu), &edu_mmio_ops, edu,
"edu-mmio", 1 << 20);
pci_register_bar(pdev, 0, PCI_BASE_ADDRESS_SPACE_MEMORY, &edu->mmio);
TODO: broken. Was working before we moved arm from -M versatilepb to -M virt around af210a76711b7fa4554dcc2abd0ddacfc810dfd4. Either make it work on -M virt if that is possible, or document precisely how to make it work with versatilepb, or hopefully vexpress which is newer.
QEMU does not have a very nice mechanism to observe GPIO activity: https://raspberrypi.stackexchange.com/questions/56373/is-it-possible-to-get-the-state-of-the-leds-and-gpios-in-a-qemu-emulation-like-t/69267#69267
The best you can do is to hack our build script to add:
HOST_QEMU_OPTS='--extra-cflags=-DDEBUG_PL061=1'
where PL061 is the dominating ARM Holdings hardware that handles GPIO.
Then compile with:
./build -aa -b br2/gpio -c kernel_config_fragment/gpio -l
then test it out with:
/gpio.sh
Source: rootfs_overlay/gpio.sh
Buildroot’s Linux tools package provides some GPIO CLI tools: lsgpio, gpio-event-mon, gpio-hammer, TODO document them here.
Those broke MIPS build in 2017-02: https://bugs.busybox.net/show_bug.cgi?id=10276 and so we force disable them in our MIPS build currently.
TODO: broken when arm moved to -M virt, same as GPIO.
Hack QEMU’s hw/misc/arm_sysctl.c with a printf:
static void arm_sysctl_write(void *opaque, hwaddr offset,
uint64_t val, unsigned size)
{
arm_sysctl_state *s = (arm_sysctl_state *)opaque;
switch (offset) {
case 0x08: /* LED */
printf("LED val = %llx\n", (unsigned long long)val);
and then rebuild with:
./build -aa -c kernel_config_fragment/leds -lq
But beware that one of the LEDs has a heartbeat trigger by default (specified on dts), so it will produce a lot of output.
And then activate it with:
cd /sys/class/leds/versatile:0
cat max_brightness
echo 255 >brightness
Relevant QEMU files:
- hw/arm/versatilepb.c
- hw/misc/arm_sysctl.c
Relevant kernel files:
- arch/arm/boot/dts/versatile-pb.dts
- drivers/leds/led-class.c
- drivers/leds/leds-sysctl.c
Minimal platform device example coded into the -M versatilepb SoC of our QEMU fork.
Using this device now requires checking out to the branch:
git checkout platform-device
before building, it does not work on master.
The module itself can be found at: https://github.com/cirosantilli/linux-kernel-module-cheat/blob/platform-device/kernel_module/platform_device.c
Rationale: we found out that the kernels that build for qemu -M versatilepb don’t work on gem5 because versatilepb is an old pre-v7 platform, and gem5 requires armv7.
At the same time, we also found out that Versatile Express (vexpress) does support armv7, so maybe we could port it over, but I had lost interest at that point, and decided to just go with the simpler -M virt machine instead.
Uses:
- hw/misc/lkmc_platform_device.c minimal device added in our QEMU fork to -M versatilepb
- the device tree entry we added to our Linux kernel fork: https://github.com/cirosantilli/linux/blob/361bb623671a52a36a077a6dd45843389a687a33/arch/arm/boot/dts/versatile-pb.dts#L42
Expected outcome after insmod:
- QEMU reports MMIO with printfs
- IRQs are generated and handled by this module, which logs to dmesg
Without insmoding this module, try writing to the register with [dev-mem]:
devmem 0x101e9000 w 0x12345678
We can also observe the interrupt with dummy-irq:
modprobe dummy-irq irq=34
insmod /platform_device.ko
The IRQ number 34 was found by looking at dmesg after:
insmod /platform_device.ko
This protocol allows sharing a mountable filesystem between guest and host.
With networking, it's boring: we can just use any of the old tools like sshfs and NFS.
One advantage of this method over NFS is that it can run without sudo on the host, or having to pass host credentials to the guest for sshfs.
TODO performance compared to NFS.
As usual, we have already set everything up for you. On host:
cd data/9p
uname -a > host
Guest:
cd /mnt/9p
cat host
uname -a > guest
Host:
cat guest
The main ingredients for this are:
- 9P settings in our kernel configs
- 9p entry on our rootfs_overlay/etc/fstab. Alternatively, you could also mount your own with:
  mkdir /mnt/my9p
  mount -t 9p -o trans=virtio,version=9p2000.L host0 /mnt/my9p
- launch QEMU with -virtfs as in your run script. When we tried security_model=mapped, writes from the guest failed due to user mismatch problems: https://serverfault.com/questions/342801/read-write-access-for-passthrough-9p-filesystems-with-libvirt-qemu
Bibliography:
Seems possible! Let's do it:
It would be uber awesome if we could overlay a 9p filesystem on top of the root.
That would allow us to have a second Buildroot target/ directory, and without any extra configs, keep the root filesystem image small, which implies:
- less host disk usage, no need to copy the entire target/ to the image again
- faster rebuild turnaround:
  - no need to regenerate the root filesystem at all and reboot
  - overcomes the check_bin_arch problem: Buildroot rebuild is slow when the root filesystem is large
- no need to worry about BR2_TARGET_ROOTFS_EXT2_SIZE
But TODO we didn’t get it working yet:
Test with the script:
/overlayfs.sh
Source: rootfs_overlay/overlayfs.sh
It shows that files from the upper/ directory do not show on the root.
Furthermore, if you try to mount the root elsewhere to prepare for a chroot:
/overlayfs.sh / /overlay
chroot /overlay
it does not work well either because sub filesystems like /proc do not show on the mount:
ls /overlay/proc
A less good alternative is to set LD_LIBRARY_PATH on the 9p mount and run executables directly from the mount.
Even more awesome than chroot would be to pivot_root, but I couldn't get that working either:
First ensure that networking is enabled before trying out anything in this section: Networking
Guest, BusyBox nc enabled with CONFIG_NC=y:
nc -l -p 45455
Host, nc from the netcat-openbsd package:
echo asdf | nc localhost 45455
Then asdf appears on the guest.
Only this specific port works by default since we have forwarded it on the QEMU command line.
We use this exact procedure to connect to gdbserver.
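The forwarding is done with QEMU's user-mode networking hostfwd option; the flags look something along these lines (an illustrative sketch, with an assumed netdev id and NIC model, not necessarily the exact CLI our run script builds):

```shell
# Illustrative QEMU flags: forward host TCP 45455 to guest TCP 45455
# (and 45456 for sshd). The id and e1000 NIC model are assumptions.
QEMU_NET_FLAGS='-netdev user,id=net0,hostfwd=tcp::45455-:45455,hostfwd=tcp::45456-:45456 -device e1000,netdev=net0'
echo "$QEMU_NET_FLAGS"
```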
Not enabled by default due to the build / runtime overhead. To enable, build with:
./build -B 'BR2_PACKAGE_OPENSSH=y'
Then inside the guest turn on sshd:
/sshd.sh
Source: rootfs_overlay/sshd.sh
And finally on host:
ssh root@localhost -p 45456
Could not do port forwarding from host to guest, and therefore could not use gdbserver: https://stackoverflow.com/questions/48941494/how-to-do-port-forwarding-from-guest-to-host-in-gem5
TODO. There is guestfwd, which sounds analogous to hostfwd used in the other direction, but I was not able to get it working, e.g.:
-netdev user,hostfwd=tcp::45455-:45455,guestfwd=tcp::45456-,id=net0 \
gives:
Could not open guest forwarding device 'guestfwd.tcp.45456'
A simpler and possibly lower overhead alternative to 9P would be to generate a secondary disk image with the benchmark you want to rebuild.
Then you can umount and re-mount on guest without reboot.
We don’t support this yet, but it should not be too hard to hack it up, maybe by hooking into rootfs_post_build_script.
This has nothing to do with the Linux kernel, but it is cool:
sudo apt-get install qemu-user
./build -a arm
cd out/arm/buildroot/target
qemu-arm -L . bin/ls
This uses QEMU’s user-mode emulation mode that allows us to run cross-compiled userland programs directly on the host.
The reason this is cool, is that ls is not statically compiled, but since we have the Buildroot image, we are still able to find the shared linker and the shared library at the given path.
In other words, much cooler than:
./out/arm/buildroot/host/bin/arm-linux-gcc -static ./kernel_module/user/hello.c
qemu-arm a.out
It is also possible to compile QEMU user mode from source with BR2_PACKAGE_HOST_QEMU_LINUX_USER_MODE=y, but then your compilation will likely fail with:
package/qemu/qemu.mk:110: *** "Refusing to build qemu-user: target Linux version newer than host's.". Stop.
since we are using a bleeding edge kernel, which is a sanity check in the Buildroot QEMU package.
Anyways, this warns us that the userland emulation will likely not be reliable, which is good to know. TODO: where is it documented that the host kernel must be as new as the target one?
GDB step debugging is also possible with:
cd out/arm/buildroot/target
qemu-arm -g 1234 -L . ../build/kernel_module-1.0/user/myinsmod.out
../host/usr/bin/arm-buildroot-linux-uclibcgnueabihf-gdb \
  --nh \
  -ex 'set architecture arm' \
  -ex 'set sysroot .' \
  -ex 'file ../build/kernel_module-1.0/user/myinsmod.out' \
  -ex 'target remote localhost:1234' \
  -ex 'break main' \
  -ex 'continue' \
  -ex 'layout split' \
;
crosstool-ng tests show that QEMU also has a runtime check for the kernel version which can fail as:
FATAL: kernel too old
but it must be using the kernel version given by glibc, since we didn’t hit that error on uclibc.
Analogous to QEMU user mode, but less usable.
First we try some -static sanity checks.
Works and prints hello:
./out/arm/buildroot/host/bin/arm-linux-gcc -static ./kernel_module/user/hello.c
./out/common/gem5/build/X86/gem5.opt ./gem5/gem5/configs/example/se.py -c ./a.out
./out/arm/buildroot/host/bin/arm-linux-gcc -static ./kernel_module/user/hello.c
./out/common/gem5/build/ARM/gem5.opt ./gem5/gem5/configs/example/se.py -c ./a.out
./out/aarch64/buildroot/host/bin/aarch64-linux-gcc -static ./kernel_module/user/hello.c
./out/common/gem5/build/ARM/gem5.opt ./gem5/gem5/configs/example/se.py -c ./a.out
But I think this is unreliable, and only works because we are using uclibc which does not check the kernel version as glibc does: https://stackoverflow.com/questions/48959349/how-to-solve-fatal-kernel-too-old-when-running-gem5-in-syscall-emulation-se-m/50542301#50542301
Ignoring that insanity, we then try it with dynamically linked executables:
./out/common/gem5/build/X86/gem5.opt ./gem5/gem5/configs/example/se.py -c ./out/x86_64/buildroot/target/hello.out
./out/common/gem5/build/ARM/gem5.opt ./gem5/gem5/configs/example/se.py -c ./out/arm/buildroot/target/hello.out
./out/common/gem5/build/ARM/gem5.opt ./gem5/gem5/configs/example/se.py -c ./out/aarch64/buildroot/target/hello.out
But they fail with:
fatal: Unable to open dynamic executable's interpreter.
and cd ./out/aarch64/buildroot/target did not help: https://stackoverflow.com/questions/50542222/how-to-run-a-dynamically-linked-executable-syscall-emulation-mode-se-py-in-gem5
The current FAQ says it is not possible to use dynamic executables: http://gem5.org/Frequently_Asked_Questions but I don’t trust it, and then these presentations mention it:
but I could not find how to actually use it.
Let’s see if user mode runs considerably faster than full system or not.
gem5 user mode:
make \
  -C out/arm/buildroot/build/dhrystone-2 \
  CC="$(pwd)/out/arm/buildroot/host/usr/bin/arm-buildroot-linux-uclibcgnueabihf-gcc" \
  CFLAGS=-static \
;
time ./out/common/gem5/build/ARM/gem5.opt \
  ./gem5/gem5/configs/example/se.py \
  -c out/arm/buildroot/build/dhrystone-2/dhrystone \
  -o 100000 \
;
gem5 full system:
printf 'm5 exit' > data/readfile
./run -aa -g -F '/gem5.sh'
printf 'm5 resetstats;dhrystone 100000;m5 exit' > data/readfile
time ./run -aa -gu -- -r 1
QEMU user mode:
time qemu-arm out/arm/buildroot/build/dhrystone-2/dhrystone 100000000
QEMU full system:
time ./run -aa -F 'time dhrystone 100000000;/poweroff.out'
Result on P51 at bad30f513c46c1b0995d3a10c0d9bc2a33dc4fa0:
- gem5 user: 33 seconds
- gem5 full system: 51 seconds
- QEMU user: 45 seconds
- QEMU full system: 223 seconds
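Quick arithmetic on those timings, just shell integer division on the numbers above:

```shell
# Full system time as a percentage of user mode time.
echo "gem5: $(( 51 * 100 / 33 ))%"    # user mode roughly 1.5x faster
echo "QEMU: $(( 223 * 100 / 45 ))%"   # user mode roughly 5x faster
```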
The QEMU monitor is a terminal that allows you to send text commands to the QEMU VM: https://en.wikibooks.org/wiki/QEMU/Monitor
Access it in either Text mode or Graphic mode with:
./qemumonitor
or send one command such as info qtree and quit the monitor:
./qemumonitor info qtree
Source: qemumonitor
qemumonitor uses the -monitor QEMU command line option, which makes the monitor listen from a socket.
qemumonitor does not support input from an stdin pipe currently, see comments on the source for rationale.
Alternatively, from text mode:
Ctrl-A C
and go back to the terminal with:
Ctrl-A C
And in graphic mode from the GUI:
Ctrl-Alt ?
where ? is a digit 1, or 2, or, 3, etc. depending on what else is available on the GUI: serial, parallel and frame buffer.
In general, ./qemumonitor is the best option, as it:
- works on both modes
- allows us to use the host Bash history to re-run one-off commands
- allows you to search the output of commands on your host shell even when in graphic mode
Getting everything to work required careful choice of QEMU command line options:
When you start hacking QEMU or gem5, it is useful to see what is going on inside the emulators themselves.
This is of course trivial since they are just regular userland programs on the host, but we make it a bit easier with:
./run -D
Then you could:
b edu_mmio_read
c
And in QEMU:
/qemu_edu.sh
When in non graphic mode, using -D makes Ctrl-C no longer get passed to the QEMU guest: it is instead captured by GDB itself, to allow breaking. So e.g. you won't be able to easily quit from a guest program like:
sleep 10
In graphic mode, make sure that you never click inside the QEMU graphic while debugging, otherwise your mouse gets captured forever, and the only solution I can find is to go to a TTY with Ctrl-Alt-F1 and kill QEMU.
You can still send key presses to QEMU however even without the mouse capture, just either click on the title bar, or alt tab to give it focus.
QEMU can log several different events.
The most interesting are events which show instructions that QEMU ran, for which we have a helper:
./trace-boot -a x86_64
You can then inspect the instructions with:
less ./out/x86_64/qemu/0/trace.txt
Get the list of available trace events:
./run -T help
Enable other specific trace events:
./run -T trace1,trace2
./qemu-trace2txt -a "$arch"
less ./out/x86_64/qemu/0/trace.txt
This functionality relies on the following setup:
- ./configure --enable-trace-backends=simple. This logs in a binary format to the trace file. It makes execution 3x faster than the default trace backend, which logs human readable data to stdout. Logging with the default log backend greatly slows down the CPU, and in particular leads to this boot message:
  All QSes seen, last rcu_sched kthread activity 5252 (4294901421-4294896169), jiffies_till_next_fqs=1, root ->qsmask 0x0
  swapper/0       R  running task        0     1      0 0x00000008
   ffff880007c03ef8 ffffffff8107aa5d ffff880007c16b40 ffffffff81a3b100
   ffff880007c03f60 ffffffff810a41d1 0000000000000000 0000000007c03f20
   fffffffffffffedc 0000000000000004 fffffffffffffedc ffffffff00000000
  Call Trace:
   <IRQ>  [<ffffffff8107aa5d>] sched_show_task+0xcd/0x130
   [<ffffffff810a41d1>] rcu_check_callbacks+0x871/0x880
   [<ffffffff810a799f>] update_process_times+0x2f/0x60
  in which the boot appears to hang for a considerable time.
- patch the QEMU source to remove the disable from exec_tb in the trace-events file. See also: https://rwmj.wordpress.com/2016/03/17/tracing-qemu-guest-execution/
We can further use Binutils' addr2line to get the line that corresponds to each address:
./trace-boot -a x86_64 && ./trace2line -a x86_64
less ./out/x86_64/qemu/0/trace-lines.txt
The format is as follows:
39368 _static_cpu_has arch/x86/include/asm/cpufeature.h:148
Where:
- 39368: number of consecutive times that a line ran. Makes the output much shorter and more meaningful.
- _static_cpu_has: name of the function that contains the line
- arch/x86/include/asm/cpufeature.h:148: file and line
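The consecutive-run counting in the first column is essentially what uniq -c does; a toy sketch of the aggregation:

```shell
# Collapse adjacent duplicate lines into "count line" pairs, like the
# trace-lines.txt format above. Note the last lineA is counted
# separately because the runs are consecutive, not global.
printf 'lineA\nlineA\nlineB\nlineA\n' | uniq -c | awk '{ print $1, $2 }'
# prints:
# 2 lineA
# 1 lineB
# 1 lineA
```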
This could of course all be done with GDB, but it would likely be too slow to be practical.
TODO do even more awesome offline post-mortem analysis things, such as:
- detect if we are in userspace or kernelspace. Should be a simple matter of reading the
- read kernel data structures, and determine the current thread. Maybe we can reuse / extend the kernel's GDB Python scripts?
QEMU supports deterministic record and replay by saving external inputs, which would be awesome to understand the kernel, as you would be able to examine a single run as many times as you would like.
This mechanism first requires a trace to be generated on an initial record run. The trace is then used on the replay runs to make them deterministic.
Unfortunately it is not working in the current QEMU: https://stackoverflow.com/questions/46970215/how-to-use-qemus-deterministic-record-and-replay-feature-for-a-linux-kernel-boo
Patches were merged in post v2.12.0-rc2 but it crashed for me and I opened a minimized bug report: https://bugs.launchpad.net/qemu/+bug/1762179
We don't expose record and replay on our scripts yet since it was not very stable, but we will do so when it stabilizes.
rand_check.out is a good way to test out if record and replay is actually deterministic.
Alternatively, mozilla/rr claims it is able to run QEMU: but using it would require you to step through QEMU code itself. Likely doable, but do you really want to?
TODO: is there any way to distinguish which instruction runs on each core? Doing:
./run -a x86_64 -c 2 -E '/poweroff.out' -T exec_tb
./qemu-trace2txt
just appears to output both cores intertwined without any clear differentiation.
TODO: is it possible to show which instructions ran at each point in time, in addition to the instruction addresses that exec_tb shows? Hopefully disassembled, not just the raw instruction memory.
PANDA can list memory addresses, so I bet it can also decode the instructions: https://github.com/panda-re/panda/blob/883c85fa35f35e84a323ed3d464ff40030f06bd6/panda/docs/LINE_Censorship.md I wonder why they don’t just upstream those things to QEMU’s tracing: panda-re/panda#290
Tracing memory accesses on vanilla QEMU seems impossible due to optimizations that QEMU does:
gem5, unlike QEMU, is deterministic by default, without needing to record and replay traces.
But it also provides a tracing mechanism documented at: http://www.gem5.org/Trace_Based_Debugging to allow easily inspecting certain aspects of the system:
./run -a aarch64 -E 'm5 exit' -g -T Exec
less out/aarch64/gem5/default/0/m5out/trace.txt
List all available debug flags:
./run -a aarch64 -G --debug-help -g
but to understand most of them you have to look at the source code:
less gem5/gem5/src/cpu/SConscript
less gem5/gem5/src/cpu/exetrace.cc
As can be seen in that SConscript, Exec is just an alias that enables a set of flags.
Be warned, the trace is humongous, at 16Gb.
We can make the trace smaller by naming the trace file as trace.txt.gz, which enables GZIP compression, but that is not currently exposed on our scripts, since you usually just need something human readable to work on.
Enabling tracing made the runtime about 4x slower on the P51, with or without .gz compression.
The output format is of type:
25007000: system.cpu T0 : @start_kernel    : stp
25007000: system.cpu T0 : @start_kernel.0  :   addxi_uop   ureg0, sp, #-112 : IntAlu :  D=0xffffff8008913f90
25007500: system.cpu T0 : @start_kernel.1  :   strxi_uop   x29, [ureg0] : MemWrite :  D=0x0000000000000000 A=0xffffff8008913f90
25008000: system.cpu T0 : @start_kernel.2  :   strxi_uop   x30, [ureg0, #8] : MemWrite :  D=0x0000000000000000 A=0xffffff8008913f98
25008500: system.cpu T0 : @start_kernel.3  :   addxi_uop   sp, ureg0, #0 : IntAlu :  D=0xffffff8008913f90
There are two types of lines:
- full instructions, as in the first line. Only shown if the ExecMacro flag is given.
- micro ops that constitute the instruction, as in the lines that follow. Yes, aarch64 also has microops: https://superuser.com/questions/934752/do-arm-processors-like-cortex-a9-use-microcode/934755#934755. Only shown if the ExecMicro flag is given.
Breakdown:
- 25007500: time count in some unit. Note how the microops execute at later timestamps.
- system.cpu: distinguishes between CPUs when there is more than one.
- T0: thread number. TODO: hyperthread? How to play with it?
- @start_kernel: we are in the start_kernel function. Awesome feature! Implemented with libelf https://sourceforge.net/projects/elftoolchain/ copy pasted in-tree at ext/libelf. To get raw addresses, remove the ExecSymbol flag, which is enabled by Exec.
- .1 as in @start_kernel.1: index of the microop.
- stp: instruction disassembly. Seems to use .isa files dispersed per arch, which is an in-house format: http://gem5.org/ISA_description_system
- strxi_uop x29, [ureg0]: microop disassembly.
- MemWrite : D=0x0000000000000000 A=0xffffff8008913f90: TODO. Further description of the microop.
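For scripted analysis, lines of this shape can be split into fields with a small regex. This is a sketch only: the pattern below is guessed from the sample output above, not from any gem5 format specification, so adjust it to the flags you actually enable:

```python
import re

# Fields guessed from the sample: tick, CPU path, thread, @symbol(.microop), rest.
LINE_RE = re.compile(
    r'(?P<tick>\d+):\s+(?P<cpu>\S+)\s+(?P<thread>T\d+)\s*:\s*'
    r'@(?P<symbol>[\w.$]+)\s*:\s*(?P<rest>.*)'
)

def parse_exec_line(line):
    """Return a dict of fields for one Exec trace line, or None if it does not match."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else None

sample = ('25007500: system.cpu T0 : @start_kernel.1 : '
          'strxi_uop x29, [ureg0] : MemWrite : D=0x0 A=0xffffff8008913f90')
fields = parse_exec_line(sample)
print(fields['tick'], fields['symbol'])
```

Grouping by the symbol field then gives a quick per-function instruction histogram, which is often all you need before reaching for the full trace2line machinery.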
Trace the source lines just like for QEMU with:
./trace-boot -a aarch64 -g && ./trace2line -a aarch64 -g
less ./out/aarch64/gem5/trace-lines.txt
TODO: 7452d399290c9c1fc6366cdad129ef442f323564 ./trace2line this is too slow and takes hours. QEMU’s processing of 170k events takes 7 seconds. gem5’s processing is analogous, but there are 140M events, so it should take 7000 seconds ~ 2 hours which seems consistent with what I observe, so maybe there is no way to speed this up… The workaround is to just use gem5’s ExecSymbol to get function granularity, and then GDB individually if line detail is needed?
Sometimes in Ubuntu 14.04, after the QEMU SDL GUI starts, it does not get updated after keyboard strokes, and there are artifacts like disappearing text.
We have not managed to track this problem down yet, but the following workaround always works:
Ctrl-Shift-U
Ctrl-C
root
This started happening when we switched to building QEMU through Buildroot, and has not been observed on later Ubuntu.
Using text mode is another workaround if you don’t need GUI features.
gem5 is a system simulator, much like QEMU: http://gem5.org/
For the most part, just add the -g option to all commands and everything should magically work:
./configure -g && ./build -a arm -g && ./run -a arm -g
TODO aarch64 boot is failing on kernel v4.17, gem5 60600f09c25255b3c8f72da7fb49100e2682093a with:
panic: Tried to write Gic cpu at offset 0xd0
Work around it for now by using v4.16:
git -C linux checkout v4.16
./configure -g && ./build -a aarch64 -g -L v4.16 && ./run -a aarch64 -g -L v4.16
git -C linux checkout -
To get a terminal, open a new shell and run:
./gem5-shell
Tested architectures:
- arm
- aarch64
- x86_64
Like QEMU, gem5 also has a user mode called syscall emulation mode (SE): gem5 syscall emulation mode
- advantages of gem5:
  - simulates a generic, more realistic, pipelined and optionally out-of-order CPU cycle by cycle, including a realistic DRAM memory access model with latencies, caches and page table manipulations. This allows us to:
    - do much more realistic performance benchmarking with it, which makes absolutely no sense in QEMU, which is purely functional
    - make certain functional observations that are not possible in QEMU, e.g.:
      - use Linux kernel APIs that flush cache memory like DMA, which are crucial for driver development. In QEMU, the driver would still work even if we forget to flush caches.
      - spectre / meltdown:
    It is of course not truly cycle accurate, as that:
    - would require exposing proprietary information of the CPU designs: https://stackoverflow.com/questions/17454955/can-you-check-performance-of-a-program-running-with-qemu-simulator/33580850#33580850
    - would make the simulation even slower. TODO confirm, by how much.
    but the approximation is reasonable.
    It is used mostly for microarchitecture research purposes: when you are making a new chip technology, you don't really need to specialize enormously to an existing microarchitecture, but rather develop something that will work with a wide range of future architectures.
  - runs are deterministic by default, unlike QEMU, which has a special QEMU record and replay mode that requires first recording a run and then replaying it
- disadvantage of gem5: slower than QEMU, see: Benchmark Linux kernel boot
  This implies that the user base is much smaller, since no Android devs. Instead, we have only chip makers, who keep everything that really works closed, and researchers, who can't version track or document code properly >:-) And this implies that:
  - the documentation is more scarce
  - it takes longer to support new hardware features
  Well, not that AOSP is that much better anyways.
- not sure: gem5 has a BSD license while QEMU has GPL.
  This suits chip makers that want to distribute forks with secret IP to their customers. On the other hand, the chip makers tend to upstream less, and the project becomes more crappy on average :-)
OK, this is why we used gem5 in the first place, performance measurements!
Let’s benchmark Dhrystone which Buildroot provides.
The most flexible way is to do:
arch=aarch64
# Generate a checkpoint after Linux boots.
# The boot takes a while, be patient young Padawan.
printf 'm5 exit' > data/readfile
./run -a "$arch" -g -F '/gem5.sh'
# Restore the checkpoint, and run the benchmark with parameter 1.000.
# We skip the boot completely, saving time!
printf 'm5 resetstats;dhrystone 1000;m5 exit' > data/readfile
./run -a "$arch" -g -- -r 1
./gem5-stat -a "$arch"
# Now with another parameter 10.000.
printf 'm5 resetstats;dhrystone 10000;m5 exit' > data/readfile
./run -a "$arch" -g -- -r 1
./gem5-stat -a "$arch"
# Get an interactive shell at the end of the restore.
printf '' > data/readfile
./run -a "$arch" -g -- -r 1
These commands output the approximate number of CPU cycles it took Dhrystone to run.
For more serious tests, you will likely want to automate logging the commands ran and results to files, a good example is: gem5-bench-cache.
A more naive and simpler to understand approach would be a direct:
./run -a aarch64 -g -E 'm5 checkpoint;m5 resetstats;dhrystone 10000;m5 exit'
but the problem is that this method does not make it easy to run a different script without booting again, see: gem5 checkpoint restore and run a different script
A few imperfections of our benchmarking method are:
- when we do m5 resetstats and m5 exit, some time passes before the exec system call returns and the actual benchmark starts and ends
- the benchmark outputs to stdout, which means extra cycles in addition to the actual computation. But TODO: how to get the output to check that it is correct without such IO cycles?
Solutions to these problems include:
- modify the benchmark code with instrumentation directly, see m5ops instructions for an example
- monitor known addresses. TODO: possible? Create an example.
Those problems should be insignificant if the benchmark runs for long enough however.
Now you can play a fun little game with your friends:
- pick a computational problem
- make a program that solves the computational problem, and outputs its result to stdout
- write the code that runs the correct computation in the smallest number of cycles possible
To find out why your program is slow, a good first step is to have a look at the statistics for the run:
cat out/aarch64/gem5/default/0/m5out/stats.txt
Whenever we run m5 dumpstats or m5 exit, a section with the following format is added to that file:
---------- Begin Simulation Statistics ----------
[the stats]
---------- End Simulation Statistics ----------
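For scripted analysis it is handy to pull out a single stat from the last such section. A minimal sketch in Python (illustrative only; our gem5-stat script does something similar but this exact code is not it):

```python
def last_stats(text):
    """Return a dict of stat name -> value for the last Begin/End section."""
    sections = text.split('---------- Begin Simulation Statistics ----------')
    last = sections[-1].split('---------- End Simulation Statistics ----------')[0]
    stats = {}
    for line in last.splitlines():
        parts = line.split()
        # Stat lines look like: name value [optional comment]
        if len(parts) >= 2:
            stats[parts[0]] = parts[1]
    return stats

sample = '''---------- Begin Simulation Statistics ----------
sim_ticks 1000
system.cpu.numCycles 52386455
---------- End Simulation Statistics ----------
'''
print(last_stats(sample)['system.cpu.numCycles'])
```

Taking the last section is the common case, since m5 exit dumps the stats of the region you just benchmarked.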
Besides optimizing a program for a given CPU setup, chip developers can also do the inverse, and optimize the chip for a given benchmark!
The rabbit hole is likely deep, but let’s scratch a bit of the surface.
./run -a arm -c 2 -g
Check with:
cat /proc/cpuinfo
getconf _NPROCESSORS_CONF
A quick ./run -g -- -h leads us to the options:
--caches --l1d_size=1024 --l1i_size=1024 --l2cache --l2_size=1024 --l3_size=1024
But keep in mind that it only affects benchmark performance of the most detailed CPU types:
| arch | CPU type        | caches used |
|------|-----------------|-------------|
| X86  | AtomicSimpleCPU | no          |
| X86  | ?               | ?*          |
| ARM  | AtomicSimpleCPU | no          |
| ARM  | HPI             | yes         |
*: couldn’t test because of:
Cache sizes can in theory be checked with the methods described at: https://superuser.com/questions/55776/finding-l2-cache-size-in-linux:
getconf -a | grep CACHE
lscpu
cat /sys/devices/system/cpu/cpu0/cache/index2/size
but for some reason the Linux kernel is not seeing the cache sizes:
Behaviour breakdown:
- arm QEMU and gem5 (both AtomicSimpleCPU or HPI), x86 gem5: /sys files don't exist, and getconf and lscpu values are empty
- x86 QEMU: /sys files exist, but getconf and lscpu values are still empty
So we take a performance measurement approach instead:
./gem5-bench-cache -a aarch64
cat out/aarch64/gem5/bench-cache.txt
which gives:
n 1000
cmd ./run -a arm -g -- -r 1 --caches --l2cache --l1d_size=1024 --l1i_size=1024 --l2_size=1024 --l3_size=1024 --cpu-type=HPI --restore-with-cpu=HPI
time 24.71
exit_status 0
cycles 52386455
instructions 4555081
cmd ./run -a arm -g -- -r 1 --caches --l2cache --l1d_size=1024kB --l1i_size=1024kB --l2_size=1024kB --l3_size=1024kB --cpu-type=HPI --restore-with-cpu=HPI
time 17.44
exit_status 0
cycles 6683355
instructions 4466051
n 10000
cmd ./run -a arm -g -- -r 1 --caches --l2cache --l1d_size=1024 --l1i_size=1024 --l2_size=1024 --l3_size=1024 --cpu-type=HPI --restore-with-cpu=HPI
time 52.90
exit_status 0
cycles 165704397
instructions 11531136
cmd ./run -a arm -g -- -r 1 --caches --l2cache --l1d_size=1024kB --l1i_size=1024kB --l2_size=1024kB --l3_size=1024kB --cpu-type=HPI --restore-with-cpu=HPI
time 36.19
exit_status 0
cycles 16182925
instructions 11422585
n 100000
cmd ./run -a arm -g -- -r 1 --caches --l2cache --l1d_size=1024 --l1i_size=1024 --l2_size=1024 --l3_size=1024 --cpu-type=HPI --restore-with-cpu=HPI
time 325.09
exit_status 0
cycles 1295703657
instructions 81189411
cmd ./run -a arm -g -- -r 1 --caches --l2cache --l1d_size=1024kB --l1i_size=1024kB --l2_size=1024kB --l3_size=1024kB --cpu-type=HPI --restore-with-cpu=HPI
time 250.74
exit_status 0
cycles 110585681
instructions 80899588
We make the following conclusions:
- the number of instructions almost does not change: the CPU is waiting for memory all the extra time. TODO: why does it change at all?
- the wall clock execution time is not directly proportional to the number of cycles: here we had about a 10x cycle increase, but only about a 1.5x time increase. This suggests that cycles in which the CPU is just waiting for memory are simulated faster.
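These ratios can be double checked against the n 10000 numbers above with a quick computation (plain Python; the values are copied verbatim from the bench-cache output):

```python
# Numbers for n=10000 from the bench-cache output above:
# small caches (1024 bytes) vs large caches (1024kB).
runs = {
    'small_cache': {'time': 52.90, 'cycles': 165704397, 'instructions': 11531136},
    'large_cache': {'time': 36.19, 'cycles': 16182925, 'instructions': 11422585},
}
cycle_ratio = runs['small_cache']['cycles'] / runs['large_cache']['cycles']
time_ratio = runs['small_cache']['time'] / runs['large_cache']['time']
insn_ratio = runs['small_cache']['instructions'] / runs['large_cache']['instructions']
# Cycles explode, wall clock barely moves, instructions are nearly constant.
print(round(cycle_ratio, 1), round(time_ratio, 1), round(insn_ratio, 2))
```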
TODO These look promising:
--list-mem-types
--mem-type=MEM_TYPE
--mem-channels=MEM_CHANNELS
--mem-ranks=MEM_RANKS
--mem-size=MEM_SIZE
TODO: how to verify this with the Linux kernel? Besides raw performance benchmarks.
TODO These look promising:
--ethernet-linkspeed --ethernet-linkdelay
and also: gem5-dist: https://publish.illinois.edu/icsl-pdgem5/
Clock frequency: TODO how does it affect performance in benchmarks?
./run -a aarch64 -g -- --cpu-clock 10000000
Check with:
m5 resetstats && sleep 10 && m5 dumpstats
and then:
./gem5-stat -a aarch64
TODO: why doesn’t this exist:
ls /sys/devices/system/cpu/cpu0/cpufreq
Buildroot built-in libraries, mostly under Libraries > Other:
- Armadillo C++: linear algebra
- fftw: Fourier transform
- Flann
- GSL: various
- liblinear
- libspatialindex
- libtommath
- qhull
These are not yet enabled, but it should be easy to do so, see: Add new Buildroot packages
Implemented by GCC itself, so just a toolchain configuration, no external libs, and we enable it by default:
/openmp.out
Source: kernel_module/user/openmp.c
Buildroot supports it, which makes everything just trivial:
./build -B 'BR2_PACKAGE_OPENBLAS=y' -k
./run -F '/openblas.out; echo $?'
Outcome: the test passes:
0
Source: kernel_module/user/openblas.c
The test performs a general matrix multiplication:
    | 1.0 -3.0 |   |  1.0 2.0  1.0 |       | 0.5 0.5 0.5 |   |  11.0 - 9.0  5.0 |
1 * | 2.0  4.0 | * | -3.0 4.0 -1.0 | + 2 * | 0.5 0.5 0.5 | = | - 9.0 21.0 -1.0 |
    | 1.0 -1.0 |                           | 0.5 0.5 0.5 |   |   5.0 - 1.0  3.0 |
This can be deduced from the Fortran interfaces at: out/x86_64/buildroot/build/openblas-*/reference/dgemmf.f, which we can map to our call as:
C := alpha*op( A )*op( B ) + beta*C,
SUBROUTINE DGEMMF( TRANA, TRANB, M, N, K, ALPHA, A, LDA, B, LDB, BETA, C, LDC )
cblas_dgemm(CblasColMajor, CblasNoTrans, CblasTrans, 3, 3, 2, 1, A, 3, B, 3, 2, C, 3);
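We can re-do that arithmetic independently in plain Python, as a sketch that re-implements C := alpha*A*B^T + beta*C with the values above (it does not use OpenBLAS or the repo's test code):

```python
# A is 3x2; B is stored 3x2 and transposed in the product; C starts as all 0.5.
A = [[1.0, -3.0], [2.0, 4.0], [1.0, -1.0]]
B = [[1.0, -3.0], [2.0, 4.0], [1.0, -1.0]]
C = [[0.5] * 3 for _ in range(3)]
alpha, beta = 1.0, 2.0
# C := alpha * A * B^T + beta * C
result = [[alpha * sum(A[i][k] * B[j][k] for k in range(2)) + beta * C[i][j]
           for j in range(3)] for i in range(3)]
print(result)
```

Since A and B happen to be the same matrix here, the result A*A^T + 1 is symmetric, which matches the expected output above.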
Header only linear algebra library with a mainline Buildroot package:
./build -B 'BR2_PACKAGE_EIGEN=y' -k
Just create an array and print it:
./run -F '/eigen.out'
Output:
3 -1 2.5 1.5
Source: kernel_module/user/eigen.cpp
This example just creates a matrix and prints it out.
Tested on: a4bdcf102c068762bb1ef26c591fcf71e5907525
We have ported parts of the PARSEC benchmark for cross compilation at: https://github.com/cirosantilli/parsec-benchmark See the documentation on that repo to find out which benchmarks have been ported. Some of the benchmarks are segfaulting; they are documented in that repo.
There are two ways to run PARSEC with this repo:
- without parsecmgmt, most likely what you want
./configure -gpq && ./build -a arm -B 'BR2_PACKAGE_PARSEC_BENCHMARK=y' -g && ./run -a arm -g
Once inside the guest, launch one of the test input sized benchmarks manually as in:
cd /parsec/ext/splash2x/apps/fmm/run
../inst/arm-linux.gcc/bin/fmm 1 < input_1
To find out how to run many of the benchmarks, have a look at the test.sh script of the parsec-benchmark repo.
From the guest, you can also run it as:
cd /parsec
./test.sh
but this might be a bit time consuming in gem5.
Running a benchmark of a size different than test, e.g. simsmall, requires a rebuild with:
./build \
  -a arm \
  -B 'BR2_PACKAGE_PARSEC_BENCHMARK=y' \
  -B 'BR2_PACKAGE_PARSEC_BENCHMARK_INPUT_SIZE="simsmall"' \
  -g \
  -- parsec-benchmark-reconfigure \
;
Large input may also require tweaking:
- BR2_TARGET_ROOTFS_EXT2_SIZE if the unpacked inputs are large
- memory size, unless you want to meet the OOM killer, which is admittedly kind of fun
test.sh only contains the run commands for the test size, and cannot be used for simsmall.
The easiest thing to do, is to scroll up on the host shell after the build, and look for a line of type:
Running /full/path/to/linux-kernel-module-cheat/out/aarch64/buildroot/build/parsec-benchmark-custom/ext/splash2x/apps/ocean_ncp/inst/aarch64-linux.gcc/bin/ocean_ncp -n2050 -p1 -e1e-07 -r20000 -t28800
and then tweak the command found in test.sh accordingly.
Yes, we do run the benchmarks on the host just to unpack / generate inputs. They are expected to fail to run, since they were built for the guest instead of the host, including for the x86_64 guest, which has a different interpreter than the host's (see: file myexecutable).
The rebuild is required because we unpack input files on the host.
Separating input sizes also allows to create smaller images when only running the smaller benchmarks.
This limitation exists because parsecmgmt generates the input files just before running via the Bash scripts, but we can’t run parsecmgmt on gem5 as it is too slow!
One option would be to do that inside the guest with QEMU.
Also, we can’t generate all input sizes at once, because many of them have the same name and would overwrite one another…
PARSEC simply wasn’t designed with non native machines in mind…
Most users won’t want to use this method because:
- running the parsecmgmt Bash scripts takes forever before it ever starts running the actual benchmarks on gem5.
  Running on QEMU is feasible, but not the main use case, since QEMU cannot be used for performance measurements.
- it requires putting the full .tar inputs on the guest, which makes the image twice as large (1x for the .tar, 1x for the unpacked input files)
It would be awesome if it were possible to use this method, since this is what Parsec supports officially, and so:
- you don't have to dig into what raw command to run
- there is an easy way to run all the benchmarks in one go to test them out
- you can just run any of the benchmarks that you want
but it simply is not feasible in gem5 because it takes too long.
If you still want to run this, try it out with:
./build \
  -a aarch64 \
  -B 'BR2_PACKAGE_PARSEC_BENCHMARK=y' \
  -B 'BR2_PACKAGE_PARSEC_BENCHMARK_PARSECMGMT=y' \
  -B 'BR2_TARGET_ROOTFS_EXT2_SIZE="3G"' \
  -g \
  -- parsec-benchmark-reconfigure \
;
And then you can run it just as you would on the host:
cd /parsec/
bash
. env.sh
parsecmgmt -a run -p splash2x.fmm -i test
If you want to remove PARSEC later, Buildroot doesn’t provide an automated package removal mechanism as documented at: https://github.com/buildroot/buildroot/blob/2017.08/docs/manual/rebuilding-packages.txt#L90, but the following procedure should be satisfactory:
rm -rf \
  ./out/common/dl/parsec-* \
  ./out/arm-gem5/buildroot/build/parsec-* \
  ./out/arm-gem5/buildroot/build/packages-file-list.txt \
  ./out/arm-gem5/buildroot/images/rootfs.* \
  ./out/arm-gem5/buildroot/target/parsec-* \
;
./build -a arm -g
If you end up going inside parsec-benchmark/parsec-benchmark to hack up the benchmark (you will!), these tips will be helpful.
Buildroot was not designed to deal with large images, and currently cross rebuilds are a bit slow, due to some image generation and validation steps.
A few workarounds are:
- develop on the host first as much as you can. Our PARSEC fork supports it.
  If you do this, don't forget to do a:
  cd parsec-benchmark/parsec-benchmark
  git clean -xdf .
  before going for the cross compile build.
- patch Buildroot to work well, and keep cross compiling all the way. This should be totally viable, and we should do it.
  Don't forget to explicitly rebuild PARSEC with:
  ./build -a arm -B 'BR2_PACKAGE_PARSEC_BENCHMARK=y' -g parsec-benchmark-reconfigure
  You may also want to test if your patches are still functionally correct inside QEMU first, which is a faster emulator.
- sell your soul, and compile natively inside the guest. We won't do this, not only because it is evil, but also because Buildroot explicitly does not support it: https://buildroot.org/downloads/manual/manual.html#faq-no-compiler-on-target
  ARM employees have been known to do this: https://github.com/arm-university/arm-gem5-rsk/blob/aa3b51b175a0f3b6e75c9c856092ae0c8f2a7cdc/parsec_patches/qemu-patch.diff
Analogous to QEMU:
./run -a arm -e 'init=/poweroff.out' -g
Internals: when we give --command-line= to gem5, it overrides default command lines, including some mandatory ones which are required to boot properly.
Our run script hardcodes the required options in the default --command-line and appends extra options given by -e.
To find the default options in the first place, we removed --command-line and ran:
./run -a arm -g
and then looked at the line of the Linux kernel that starts with:
Kernel command line:
Analogous to QEMU, on the first shell:
./run -a arm -d -g
On the second shell:
./rungdb -a arm -g
On a third shell:
./gem5-shell
When you want to break, just do a Ctrl-C on GDB shell, and then continue.
And we now see the boot messages, and then get a shell. Now try the /count.sh procedure described for QEMU: GDB step debug kernel post-boot.
TODO: how to stop at start_kernel? gem5 listens for GDB by default, and therefore does not wait for a GDB connection to start like QEMU does. So when GDB connects we might have already passed start_kernel. Maybe --debug-break=0 can be used? https://stackoverflow.com/questions/49296092/how-to-make-gem5-wait-for-gdb-to-connect-to-reliably-break-at-start-kernel-of-th
TODO: GDB fails with:
Reading symbols from vmlinux...done. Remote debugging using localhost:7000 Remote 'g' packet reply is too long: 000000000000000090a4f90fc0ffffff4875450ec0ffffff01000000000000000100000000000000000000000000000001000000000000000000000000000000ffffffffffffffff646d60616b64fffe7f7f7f7f7f7f7f7f0101010101010101300000000000000000000000ffffffff48454422207d2c2017162f21262820160100000000000000070000000000000001000000000000004075450ec0ffffffc073450ec0ffffff82080000000000004075450ec0ffffff8060f90fc0ffffffc073450ec0fffffff040900880ffffff40ab400ec0ffffff586d900880ffffff0068a20ec0ffffff903b010880ffffffc8ff210880ffffff903b010880ffffffccff210880ffffff0500002000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
and gem5 says:
4107766500: system.remote_gdb: remote gdb attached warn: Couldn't read data from debugger. 4107767500: system.remote_gdb: remote gdb detached
I’ve also tried the fix at: https://stackoverflow.com/questions/27411621/remote-g-packet-reply-is-too-long-aarch64-arm64 by adding to the rungdb script:
-ex 'set tdesc filename out/aarch64/buildroot/build/gdb-7.11.1/./gdb/features/aarch64.xml'
but it did not help.
We are unable to use gdbserver because of networking: gem5 host to guest networking
The alternative is to do as in GDB step debug userland processes.
First make sure that GDB step debugging the kernel works for your architecture: gem5 GDB step debug, on which we rely. When we last tested, this was not the case for aarch64: gem5 GDB step debug kernel aarch64
Next, follow the exact same steps explained at [gdb-step-debug-userland-non-init-without-d], but passing -g to every command as usual.
But then TODO (I'll still go crazy one of those days): for arm, while debugging /myinsmod.out /hello.ko, after the lines:
23 if (argc < 3) {
24 params = "";
I press n, it just runs the program until the end, instead of stopping on the next line of execution. The module does get inserted normally.
TODO:
./rungdb-user -a arm -g gem5-1.0/gem5/util/m5/m5 main
breaks when m5 is run on guest, but does not show the source code.
Analogous to QEMU’s Snapshot, but better since it can be started from inside the guest, so we can easily checkpoint after a specific guest event, e.g. just before init is done.
Documentation: http://gem5.org/Checkpoints
./run -a arm -g
In the guest, wait for the boot to end and run:
m5 checkpoint
where m5 is a guest utility present inside the gem5 tree which we cross-compiled and installed into the guest.
To restore the checkpoint, kill the VM and run:
./run -a arm -g -- -r 1
Let’s create a second checkpoint to see how it works, in guest:
date >f m5 checkpoint
Kill the VM, and try it out:
./run -a arm -g -- -r 2
and now in the guest:
cat f
contains the date. The file f wouldn’t exist had we used the first checkpoint with -r 1.
If you automate things with Kernel command line parameters as in:
./run -a arm -E 'm5 checkpoint;m5 resetstats;dhrystone 1000;m5 exit' -g
Then there is no need to pass the kernel command line again to gem5 for replay:
./run -a arm -g -- -r 1
since boot has already happened, and the parameters are already in the RAM of the snapshot.
Checkpoints are stored inside the m5out directory at:
out/<arch>/gem5/<gem5-variant>/<run-id>/m5out/cpt.<checkpoint-time>
and TODO: confirm that the -r N flag takes the N-th checkpoint ordered by running time, which is not necessarily the last one that was taken, unless you take the second one in the same simulation as the first one.
This integer value is just pure fs.py sugar: the backend at m5.instantiate just takes the actual checkpoint directory as input.
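That sugar can be sketched in a few lines of Python. This is my reading of how -r N maps to a cpt.<tick> directory (sort by tick, take the N-th); confirm against fs.py itself before relying on it:

```python
def checkpoint_dir(m5out_entries, n):
    """Pick the checkpoint that -r N would select: the n-th cpt.<tick>
    directory when sorted by tick (simulated time of the checkpoint)."""
    cpts = sorted(
        (e for e in m5out_entries if e.startswith('cpt.')),
        key=lambda e: int(e.split('.')[1])
    )
    return cpts[n - 1]

# Hypothetical m5out directory listing.
entries = ['stats.txt', 'config.ini', 'cpt.25007500', 'cpt.93945522500']
print(checkpoint_dir(entries, 2))
```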
You want to automate running several tests from a single pristine post-boot state.
The problem is that boot takes forever, and after the checkpoint, the memory and disk states are fixed, so you can’t for example:
-
hack up an existing rc script, since the disk is fixed
-
inject new kernel boot command line options, since those have already been put into memory by the bootloader
There is however one loophole: m5 readfile, which reads whatever is present on the host, so we can do it like:
printf 'echo "setup run";m5 exit' > data/readfile
./run -a aarch64 -g -E 'm5 checkpoint;m5 readfile > a.sh;sh a.sh'
printf 'echo "first benchmark";m5 exit' > data/readfile
./run -a aarch64 -g -- -r 1
printf 'echo "second benchmark";m5 exit' > data/readfile
./run -a aarch64 -g -- -r 1
Since this is such a common setup, we provide a helper for it at: rootfs_overlay/gem5.sh.
Other loophole possibilities include:
- create multiple disk images, and mount the benchmark from one of them
- expect, as mentioned at: https://stackoverflow.com/questions/7013137/automating-telnet-session-using-bash-scripts
  #!/usr/bin/expect
  spawn telnet localhost 3456
  expect "# $"
  send "pwd\r"
  send "ls /\r"
  send "m5 exit\r"
  expect eof
gem5 can switch to a different CPU model when restoring a checkpoint.
A common combo is to boot Linux with a fast CPU, make a checkpoint and then replay the benchmark of interest with a slower CPU.
An illustrative interactive run:
./run -a arm -g
In guest:
m5 checkpoint
And then restore the checkpoint with a different CPU:
./run -a arm -g -- --caches -r 1 --restore-with-cpu=HPI
Pass options to the fs.py script:
- get help:
  ./run -g -- -h
- boot with the more detailed and slow HPI CPU model:
  ./run -a arm -g -- --caches --cpu-type=HPI
Pass options to the gem5 executable itself:
- get help:
  ./run -G '-h' -g
Quit the simulation after 1024 instructions:
./run -g -- -I 1024
Can be nicely checked with gem5 tracing.
Cycles instead of instructions:
./run -g -- -m 1024
Otherwise the simulation runs forever by default.
m5ops are magic instructions which lead gem5 to do magic things, like quitting or dumping stats.
Documentation: http://gem5.org/M5ops
There are two main ways to use m5ops:
m5 is convenient if you only want to take snapshots before or after the benchmark, without altering its source code. It uses the m5ops instructions as its backend.
m5 cannot / should not be used however:
-
in bare metal setups
-
when you want to call the instructions from inside interest points of your benchmark. Otherwise you add the syscall overhead to the benchmark, which is more intrusive and might affect results.
Why not just hardcode some m5ops instructions as in our example instead, since you are going to modify the source of the benchmark anyways?
m5 is a command line utility that is installed and run on the guest, and that serves as a CLI front-end for the m5ops.
Its source is present in the gem5 tree: https://github.com/gem5/gem5/blob/6925bf55005c118dc2580ba83e0fa10b31839ef9/util/m5/m5.c
It is possible to guess what most tools do from the corresponding m5ops, but let’s at least document the less obvious ones here.
Quit gem5 with exit status 0.
Send a guest file to the host. 9P is a more advanced alternative.
Guest:
echo mycontent > myfileguest
m5 writefile myfileguest myfilehost
Host:
cat out/aarch64/gem5/default/0/m5out/myfilehost
Does not work for subdirectories, gem5 crashes:
m5 writefile myfileguest mydirhost/myfilehost
Host:
date > data/readfile
Guest:
m5 readfile
Host:
printf '#!/bin/sh
echo asdf
' > data/readfile
Guest:
touch /tmp/execfile
chmod +x /tmp/execfile
m5 execfile
The executable /m5ops.out illustrates how to hard code with inline assembly the m5ops that you are most likely to hack into the benchmark you are analysing:
# checkpoint
/m5ops.out c
# dumpstats
/m5ops.out d
# exit
/m5ops.out e
# resetstats
/m5ops.out r
Source: kernel_module/user/m5ops.c
That executable is of course a subset of m5 and useless by itself: its goal is only to illustrate how to hardcode some m5ops yourself as one-liners.
In theory, the cleanest way to add m5ops to your benchmarks would be to do exactly what the m5 tool does:
-
include include/gem5/asm/generic/m5ops.h
-
link with the .o file under util/m5 for the correct arch, e.g. m5op_arm_A64.o for aarch64
However, I think it is usually not worth the trouble of hacking up the build system of the benchmark to do this, and I recommend just hardcoding in a few raw instructions here and there, and managing it with version control + sed.
Let’s study how m5 uses them:
-
include/gem5/asm/generic/m5ops.h: defines the magic constants that represent the instructions
-
util/m5/m5op_arm_A64.S: uses the magic constants that represent the instructions with C preprocessor magic
-
util/m5/m5.c: the actual executable. Gets linked to m5op_arm_A64.S, which defines a function for each m5op.
We notice that there are two different implementations for each arch:
-
magic instructions, which don’t exist in the corresponding arch
-
magic memory addresses on a given page
TODO: what is the advantage of magic memory addresses over magic instructions? They require more setup work, since you have to tell the kernel never to touch the magic page. For the magic instructions, the only thing that could go wrong is some crazy kind of fuzzing workload that generates random instructions.
Then, in aarch64 magic instructions for example, the lines:
.macro m5op_func, name, func, subfunc
.globl \name
\name:
.long 0xff000110 | (\func << 16) | (\subfunc << 12)
ret
define a simple function for each m5op. Here we see that:
-
0xff000110 is a base mask for the magic non-existing instruction
-
\func and \subfunc are OR-applied on top of the base mask, and define which m5op this is. Those values will loop over the magic constants defined in m5ops.h with the deferred preprocessor idiom. For example, exit is 0x21 due to: #define M5OP_EXIT 0x21
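As a sanity check of the encoding above, the full instruction word for the aarch64 exit op can be recomputed with shell arithmetic (subfunc included even though it is 0 here):

```shell
# Recompute the aarch64 magic instruction word for M5OP_EXIT (0x21),
# subfunc 0, from the base mask quoted above.
func=0x21
subfunc=0x0
printf '0x%08x\n' $(( 0xff000110 | (func << 16) | (subfunc << 12) ))
```

This prints 0xff210110, which should be the 32-bit word for m5_exit in a disassembly of m5op_arm_A64.o.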
Finally, m5.c calls the defined functions as in:
m5_exit(ints[0]);
Therefore, the runtime "argument" that gets passed to the instruction, e.g. the desired exit status in the case of exit, gets passed directly through the aarch64 calling convention.
That convention specifies that x0 to x7 contain the function arguments, so x0 contains the first argument, and x1 the second.
In our m5ops example, we just hardcode everything in the assembly one-liners we are producing.
We ignore the \subfunc since it is always 0 on the ops that interest us.
include/gem5/asm/generic/m5ops.h also describes some annotation instructions.
https://gem5.googlesource.com/arm/linux/ contains an ARM Linux kernel fork with a few gem5 specific Linux kernel patches on top of mainline created by ARM Holdings.
Those patches look interesting, but it is obviously not possible to understand what they actually do from their commit message.
So let’s explain them one by one here as we understand them:
-
drm: Add component-aware simple encoder: allows you to see images through VNC: Graphic mode gem5
-
gem5: Add support for gem5’s extended GIC mode: adds support for more than 8 cores: https://stackoverflow.com/questions/50248067/how-to-run-a-gem5-arm-aarch64-full-system-simulation-with-fs-py-with-more-than-8/50248068#5024806
We use the m5term in-tree executable to connect to the terminal instead of a direct telnet.
If you use telnet directly, it mostly works, but certain interactive features don’t, e.g.:
-
up and down arrows for history navigation
-
tab to complete paths
-
Ctrl-C to kill processes
TODO understand in detail what m5term does differently than telnet.
Let’s try to understand some stats better.
x86 instruction that returns the cycle count since reset:
./build -kg && ./run -E '/rdtsc.out;m5 exit;' -g
./gem5-stat
Source: kernel_module/user/rdtsc.c
rdtsc outputs a cycle count which we compare with gem5’s gem5-stat:
-
3828578153: rdtsc
-
3830832635: gem5-stat
which gives pretty close results, and serves as a nice sanity check that the cycle counter is coherent.
It is also nice to see that rdtsc is a bit smaller than the stats.txt value, since the latter also includes the exec syscall for m5.
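To quantify just how close the two counts are, using the values listed above:

```shell
# Relative difference between the guest rdtsc value and the gem5
# stats.txt cycle count quoted above.
awk 'BEGIN { printf "%.3f%%\n", (3830832635 - 3828578153) / 3828578153 * 100 }'
```

which prints 0.059%, i.e. the two counters agree to better than a tenth of a percent.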
Bibliography:
TODO: we didn’t manage to find a working ARM analogue to rdtsc: kernel_module/pmccntr.c is oopsing, and even if it weren’t, it likely won’t give the cycle count since boot, since the counter needs to be activated before it starts counting anything:
We have made a crazy setup that allows you to just cd into gem5/gem5, and edit Python scripts directly there.
This is not normally possible with Buildroot, since normal Buildroot packages first copy files to the output directory (out/<arch>/buildroot/build/<pkg>), and then build there.
So if you modified the Python scripts with this setup, you would still need to ./build to copy the modified files over.
For gem5 specifically however, we have hacked up the build so that we cd into the gem5/gem5 tree, and then do an out of tree build to out/common/gem5.
Another advantage of this method is that we factor out the arm and aarch64 gem5 builds, which are identical and large, as well as the smaller arch generic pieces.
Using Buildroot for gem5 is still convenient because we use it to:
-
cross build m5 for us
-
check timestamps and skip the gem5 build when it is not requested
The out-of-tree build is required, because otherwise Buildroot would copy the build output of all archs to each arch directory, resulting in arch^2 build copies, which is significant.
By default, we use configs/example/fs.py script.
The -X-b option enables the alternative configs/example/arm/fs_bigLITTLE.py script instead.
First apply:
echo '
diff --git a/configs/example/arm/fs_bigLITTLE.py b/configs/example/arm/fs_bigLITTLE.py
index 7d66c03a6..d71e714fe 100644
--- a/configs/example/arm/fs_bigLITTLE.py
+++ b/configs/example/arm/fs_bigLITTLE.py
@@ -194,7 +194,7 @@ def build(options):
"norandmaps",
"loglevel=8",
"mem=%s" % default_mem_size,
- "root=/dev/vda1",
+ "root=/dev/vda",
"rw",
"init=%s" % options.kernel_init,
"vmalloc=768MB",
' | patch -d gem5/gem5 -p1
then:
./run -aA -g -X-b
Checkpoints can be restored with:
./run -aA -g -X-b -- --restore-from=out/aarch64/gem5/default/0/m5out/cpt.*
Advantages over fs.py:
-
more representative of mobile ARM SoCs, which almost always have big.LITTLE clusters
-
simpler than
fs.py, and therefore easier to understand and modify
Disadvantages over fs.py:
-
only works for ARM, not other archs
-
not as many configuration options as
fs.py, many things are hardcoded
We set up 2 big and 2 small CPUs, but cat /proc/cpuinfo shows 4 identical CPUs instead of 2 CPUs of each of two different types, likely because gem5 does not expose some informational register, much like the caches: https://www.mail-archive.com/[email protected]/msg15426.html config.ini does show that the two big ones are DerivO3CPU and the small ones are MinorCPU.
TODO: why is the --dtb required despite fs_bigLITTLE.py having a DTB generation capability? Without it, nothing shows on terminal, and the simulation terminates with simulate() limit reached @ 18446744073709551615. The magic vmlinux.vexpress_gem5_v1.20170616 works however without a DTB.
Tested on: 18c1c823feda65f8b54cd38e261c282eee01ed9f
We provide the following mechanisms:
-
./build -b data/br2: append the Buildroot configuration file data/br2 to a single build. Must be passed every time you run ./build. The format is the same as br2/default.
-
./build -B 'BR2_SOME_OPTION="myval"': append a single option to a single build.
You will then likely want to make those more permanent with: Don’t retype arguments all the time
If you are benchmarking compiled programs instead of hand written assembly, remember that we configure Buildroot to disable optimizations by default with:
BR2_OPTIMIZE_0=y
to improve the debugging experience.
You will likely want to change that to:
BR2_OPTIMIZE_3=y
Our kernel_module/user package correctly forwards the Buildroot options to the build with $(TARGET_CONFIGURE_OPTS), so you don’t have to do any extra work.
Don’t forget to do that if you are adding a new package with your own build system.
Then, you have two choices:
-
if you already have a full
-O0 build, you can choose to rebuild just your package of interest to save some time, as described at: Rebuild a package with different build options
./build -B 'BR2_OPTIMIZE_3=y' kernel_module-dirclean kernel_module-reconfigure
However, this approach might not be representative since calls to an unoptimized libc and other libraries will have a negative performance impact.
Maybe you can get away with rebuilding libc, but I’m not sure that it will work properly.
Kernel-wise it should be fine though due to: Disable kernel compiler optimizations
-
clean the build and rebuild from scratch:
mv out out~ ./build -B 'BR2_OPTIMIZE_3=y'
make menuconfig is a convenient way to find Buildroot configurations:
cd out/x86_64/buildroot
make menuconfig
Hit / and search for the settings.
Save and quit.
diff -u .config.old .config
Then copy and paste the diff additions to br2/default to make them permanent.
At startup, we login automatically as the root user.
If you want to switch to another user to test some permissions, we have already created a user0 user through the user_table file, and you can just login as that user with:
login user0
and password:
a
Then test that the user changed with:
id
which gives:
uid=1000(user0) gid=1000(user0) groups=1000(user0)
We have enabled ccache builds by default.
BR2_CCACHE_USE_BASEDIR=n is used, which means that:
-
absolute paths are used and GDB can find source files
-
but builds are not reused across separated LKMC directories
ccache can considerably speed up builds when you:
-
are switching between multiple configurations for a given package to bisect something out, as mentioned at: Modify kernel config
-
clean the build because things stopped working. We store the cache outside of this repository, so you can nuke away without fear
The default ccache environment variables are honored if you have them set, which we recommend you do. E.g., in your .bashrc:
export CCACHE_DIR=~/.ccache
export CCACHE_MAXSIZE="20G"
The choice basically comes down to:
-
should I store my cache on my HD or SSD?
-
how big is my build, and how many build configurations do I need to keep around at a time?
If you don’t set it, the default is to use ~/.buildroot-ccache with 5G, which is a bit small for us.
I find it very relaxing to watch ccache at work with:
watch -n1 'make -C out/x86_64/buildroot/ ccache-stats'
or if you have it installed on host and the environment variables exported simply with:
watch -n1 'ccache -s'
while a build is going on in another terminal and my cooler is humming. Especially when the hit count goes up ;-) The joys of system programming.
First, see if you can’t get away without actually adding a new package, for example:
-
if you have a standalone C file with no dependencies besides the C standard library to be compiled with GCC, just add a new file under kernel_module/user and you are done
-
if you have a dependency on a library, first check if Buildroot doesn’t have a package for it already with
ls buildroot/package. If yes, just enable that package as explained at: Custom Buildroot options
If none of those methods are flexible enough for you, create a new package as follows:
-
use packages/sample_package as a starting point
-
fork this repository, and modify that package to do what you want
-
read the comments on that package to get an idea of how to start
-
check the main manual for more complicated things: https://buildroot.org/downloads/manual/manual.html
-
don’t forget to rebuild with:
./build -- sample_package-reconfigure
./run -F '/sample_package.out'
if you make any changes to that package after the initial build: Rebuild
It often happens that you are comparing two versions of the build, a good and a bad one, and trying to figure out why the bad one is bad.
This section describes some techniques that can help to reduce the build time and disk usage in those situations.
The most coarse thing you can do is to keep two full checkouts of this repository, possibly with git subtree.
This approach has the advantage of being simple and robust, but it wastes a lot of space and time for the full rebuild, since ccache does not make compilation instantaneous due to configuration file reading.
The next less coarse approach, is to use the -s option:
./build -s mybranch
which generates a full new build under out/, named for example out/x86_64-mybranch, but at least avoids duplicating the source.
TODO: only -s works for ./build, e.g. if you want to ./run afterwards you need to manually mv build around. This should be easy to patch however.
Since the Linux kernel is so important to us, we have created a convenient dedicated mechanism for it.
For example, if you want to keep two builds around, one for the latest Linux version, and the other for Linux v4.16:
./build
git -C linux checkout v4.16
./build -L v4.16
git -C linux checkout -
./run
./run -L v4.16
The -L option should be passed to all scripts that support it, much like -a for the CPU architecture, e.g. to step debug:
./rungdb -L v4.16
This technique is implemented semi-hackishly by moving symlinks around inside the Buildroot build dir at build time, and selecting the right build directory at runtime.
Analogous to the Linux kernel build variants but with the -M option instead:
./build -g
git -C gem5/gem5 checkout some-branch
./build -g -M some-branch
git -C gem5/gem5 checkout -
./run -g
git -C gem5/gem5 checkout some-branch
./run -M some-branch -g
Don’t forget however that gem5 has Python scripts in its source code tree, and that those must match the source code of a given build.
Therefore, you must not forget to check out the sources to those of the corresponding build before running, unless you explicitly tell gem5 to use a non-default source tree with -N.
This becomes inevitable when you want to launch gem5 simultaneous runs with build variants.
In order to check out multiple gem5 builds and run them simultaneously, you also need to use the -N option:
./build -g
git -C gem5/gem5 checkout some-branch
./build -g -M some-branch -N some-branch
git -C gem5/gem5 checkout -
./run -g -n 0 &>/dev/null &
./run -g -M some-branch -N some-branch -n 1 &>/dev/null &
The -N <worktree-id> determines the location of the gem5 tree to be used for both:
-
the input C files of the build at build time
-
the Python scripts to be used at runtime
The difference between -M and -N is that -M specifies the gem5 build output directory, while -N specifies the source input directory.
When -N is not given, the source tree under gem5/gem5 is used.
If -N <worktree-id> is given, the directory used is data/gem5/<worktree-id>, and:
-
if that directory does not exist, create a git worktree at a branch wt/<worktree-id> on the current commit of gem5/gem5 there. The wt/ branch name prefix stands for WorkTree, and allows us to check out a test some-branch branch under gem5/gem5 and still use -N some-branch, without conflict for the worktree branch, which can only be checked out once.
-
otherwise, leave that worktree untouched, without updating it
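The first-time behaviour described above can be sketched with plain git commands in a throwaway repository; the repository, branch and directory names here are purely illustrative, this is not what our scripts literally run:

```shell
# Sketch of first-time -N behaviour: create a git worktree on a new
# wt/<worktree-id> branch at the current commit. Names are illustrative.
set -e
tmp=$(mktemp -d)
git -C "$tmp" init -q
git -C "$tmp" -c user.name=a -c user.email=a@a commit -q --allow-empty -m init
git -C "$tmp" worktree add -b wt/some-branch "$tmp/wt-some-branch" >/dev/null
git -C "$tmp/wt-some-branch" rev-parse --abbrev-ref HEAD
rm -rf "$tmp"
```

This prints wt/some-branch: the worktree directory sits on its own branch, so the main checkout is free to move elsewhere.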
-N is only required if you have multiple gem5 checkouts, e.g. it would not be required for multiple builds of the same tree, e.g. a gem5 debug build and a non-debug one.
Build and run gem5.debug, which has optimizations turned off unlike the default gem5.opt:
./build -aA -g -M debug -t debug
./run -aA -g -M debug -t debug
-M is optional just to prevent it from overwriting the opt build.
A Linux kernel boot was about 14 times slower than opt at 71e927e63bda6507d5a528f22c78d65099bdf36f between the commands:
./run -aA -E 'm5 exit' -g -L v4.16
./run -aA -E 'm5 exit' -g -M debug -t debug -L v4.16
Therefore the performance difference is very large, making debug mode almost unusable.
This hack-ish technique allows us to rebuild just one package at a time:
./build KERNEL_MODULE_VERSION=mybranch
and now you can see that a new version of kernel_module was built and put inside the image:
ls out/x86_64/buildroot/build/kernel_module-mybranch
Unfortunately we don’t have a nice runtime selection with ./run implemented currently, you have to manually move packages around.
TODO: is there a way to do it nicely for *_OVERRIDE_SRCDIR packages from buildroot_override? I tried:
./build -l LINUX_VERSION=mybranch
but it fails with:
linux/linux.mk:492: *** LINUX_SITE cannot be empty when LINUX_SOURCE is not. Stop.
and then I tried:
./build -l LINUX_VERSION=mybranch LINUX_SITE="$(pwd)/linux"
but it feels hackish, and the build was slower than normal: it looks like the build was single threaded.
When adding a new large package to the Buildroot root filesystem, it may fail with the message:
Maybe you need to increase the filesystem size (BR2_TARGET_ROOTFS_EXT2_SIZE)
The solution is to simply add:
./build -B 'BR2_TARGET_ROOTFS_EXT2_SIZE="512M"'
where 512M is "large enough".
Note that dots cannot be used as in 1.5G, so just use Megs as in 1500M instead.
Unfortunately, TODO we don’t have a perfect way to find the right value for BR2_TARGET_ROOTFS_EXT2_SIZE. One good heuristic is:
du -hsx out/arm-gem5/buildroot/target/
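For example, a hypothetical sizing helper, not part of this repo, that adds roughly 25% of slack on top of the du measurement and prints a whole-megabyte value, since values with dots are rejected:

```shell
# Hypothetical helper: start from the target/ size in KiB (a stand-in
# constant here instead of the actual du output), add ~25% slack, and
# print a whole-megabyte value suitable for BR2_TARGET_ROOTFS_EXT2_SIZE.
size_kb=1200000  # stand-in for: du -sxk out/arm-gem5/buildroot/target/ | cut -f1
size_mb=$(( size_kb * 5 / 4 / 1024 + 1 ))
echo "BR2_TARGET_ROOTFS_EXT2_SIZE=\"${size_mb}M\""
```

With the stand-in input this prints BR2_TARGET_ROOTFS_EXT2_SIZE="1465M"; the exact slack factor is a guess, not a tested heuristic.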
libguestfs is very promising https://serverfault.com/questions/246835/convert-directory-to-qemu-kvm-virtual-disk-image/916697#916697, in particular vfs-minimum-size.
One way to overcome this problem is to mount benchmarks from host instead of adding them to the root filesystem, e.g. with: 9P.
Buildroot is not designed for large root filesystem images, and the rebuild becomes very slow when we add a large package to it.
This is mainly due to the pkg-generic GLOBAL_INSTRUMENTATION_HOOKS sanitization, which goes over the entire tree doing complex operations that we don’t like, in particular check_bin_arch and check_host_rpath.
We have applied 983fe7910a73923a4331e7d576a1e93841d53812 to our Buildroot fork, which removes part of the pain by not running:
>>> Sanitizing RPATH in target tree
which contributed to a large part of the slowness.
Test how Buildroot deals with many files with:
./build -B BR2_PACKAGE_LKMC_MANY_FILES=y -- lkmc_many_files-reconfigure |& ts -i '%.s'
./build |& ts -i '%.s'
and notice how the second build, which does not rebuild the package at all, still gets stuck in the RPATH check forever without our Buildroot patch.
When asking for help on upstream repositories outside of this repository, you will need to provide the commands that you are running in detail without referencing our scripts.
For example, QEMU developers will only want to see the final QEMU command that you are running.
For the configure and build, search for the Building and Configuring parts of the build log, then try to strip down all Buildroot related paths, to keep only options that seem to matter.
We make that easy by building commands as strings, and then echoing them before evaling.
So for example when you run:
./run -a arm
Stdout shows a line with the full command of type:
./out/arm/buildroot/host/usr/bin/qemu-system-arm -m 128M -monitor telnet::45454,server,nowait -netdev user,hostfwd=tcp::45455-:45455,id=net0 -smp 1 -M versatilepb -append 'root=/dev/sda nokaslr norandmaps printk.devkmsg=on printk.time=y' -device rtl8139,netdev=net0 -dtb ./out/arm/buildroot/images/versatile-pb.dtb -kernel ./out/arm/buildroot/images/zImage -serial stdio -drive file='./out/arm/buildroot/images/rootfs.ext2.qcow2,if=scsi,format=qcow2'
and this line is also saved to a file for convenience:
cat ./out/arm/qemu/0/run.sh
or for gem5:
cat ./out/arm/gem5/default/0/run.sh
Next, you will also want to give them the relevant images to save time. Zip the images with:
./build-all -G
./zip-img
and then upload the out/images-*.zip file somewhere, e.g. GitHub release assets as in https://github.com/cirosantilli/linux-kernel-module-cheat/releases/tag/test-replay-arm
Finally, do a clone of the relevant repository out of tree and reproduce the bug there, to be 100% sure that it is an actual upstream bug, and to provide developers with the cleanest possible commands.
For QEMU and Buildroot, we have the following convenient setups respectively:
This section documents how to benchmark builds and runs of this repo, and how to investigate what the bottleneck is.
Ideally, we should setup an automated build server that benchmarks those things continuously for us, but our Travis attempt failed.
So currently, we are running benchmarks manually when it seems reasonable and uploading them to: https://github.com/cirosantilli/linux-kernel-module-cheat-regression
All benchmarks were run on the P51 machine, unless stated otherwise.
Run all benchmarks and upload the results:
./bench-all -A
We tried to automate it on Travis with .travis.yml but it hits the current 50 minute job timeout: https://travis-ci.org/cirosantilli/linux-kernel-module-cheat/builds/296454523 And I bet it would likely hit a disk maxout either way if it went on.
./bench-boot
cat out/bench-boot.txt
Sample results at 2bddcc2891b7e5ac38c10d509bdfc1c8fe347b94:
cmd ./run -a x86_64 -E '/poweroff.out'
time 3.58
exit_status 0

cmd ./run -a x86_64 -E '/poweroff.out' -K
time 0.89
exit_status 0

cmd ./run -a x86_64 -E '/poweroff.out' -T exec_tb
time 4.12
exit_status 0
instructions 2343768

cmd ./run -a x86_64 -E 'm5 exit' -g
time 451.10
exit_status 0
instructions 706187020

cmd ./run -a arm -E '/poweroff.out'
time 1.85
exit_status 0

cmd ./run -a arm -E '/poweroff.out' -T exec_tb
time 1.92
exit_status 0
instructions 681000

cmd ./run -a arm -E 'm5 exit' -g
time 94.85
exit_status 0
instructions 139895210

cmd ./run -a aarch64 -E '/poweroff.out'
time 1.36
exit_status 0

cmd ./run -a aarch64 -E '/poweroff.out' -T exec_tb
time 1.37
exit_status 0
instructions 178879

cmd ./run -a aarch64 -E 'm5 exit' -g
time 72.50
exit_status 0
instructions 115754212

cmd ./run -a aarch64 -E 'm5 exit' -g -- --cpu-type=HPI --caches --l2cache --l1d_size=1024kB --l1i_size=1024kB --l2_size=1024kB --l3_size=1024kB
time 369.13
exit_status 0
instructions 115774177
TODO: aarch64 gem5 and QEMU use the same kernel, so why is the gem5 instruction count so much higher?
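The gap can be put in numbers with the aarch64 rows of the sample results above (the QEMU number comes from the exec_tb trace, so the comparison is only indicative):

```shell
# Ratio of gem5 to QEMU reported guest instruction counts for the
# aarch64 boot, values copied from the sample results above.
gem5_insts=115754212
qemu_insts=178879
echo $(( gem5_insts / qemu_insts ))
```

which prints 647, so gem5 reports roughly 650 times more instructions for the same kernel.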
TODO: the following takes 33 minutes to finish at 62f6870e4e0b384c4bd2d514116247e81b241251:
cmd ./run -a arm -E 'm5 exit' -g -- --caches --cpu-type=HPI
while aarch64 takes only 7 minutes.
I had previously documented on README 10 minutes at: 2eff007f7c3458be240c673c32bb33892a45d3a0 found with git log search for 10 minutes. But then I checked out there, ran it, and the kernel panics before any messages come out. Lol?
Logs of the runs can be found at: https://github.com/cirosantilli-work/gem5-issues/tree/0df13e862b50ae20fcd10bae1a9a53e55d01caac/arm-hpi-slow
The cycle count is higher for arm, 350M vs 250M for aarch64, but nowhere near the 5x runtime increase.
A quick look at the boot logs shows that they are basically identical in structure: the same operations appear more or less on both, and there isn’t one specific huge time pit in arm: it is just that every individual operation seems to be taking a lot longer.
Kernel panic - not syncing: Attempted to kill the idle task!
The build times are calculated after doing ./configure and make source, which downloads the sources, and basically benchmarks the Internet.
Sample build time at 2c12b21b304178a81c9912817b782ead0286d282: 28 minutes, 15 with full ccache hits. Breakdown: 19% GCC, 13% Linux kernel, 7% uclibc, 6% host-python, 5% host-qemu, 5% host-gdb, 2% host-binutils
Single file change on ./build kernel_module-reconfigure: 7 seconds.
Buildroot automatically stores build timestamps as milliseconds since Epoch. Convert to minutes:
awk -F: 'NR==1{start=$1}; END{print ($1 - start)/(60000.0)}' out/x86_64/buildroot/build/build-time.log
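The one-liner can be sanity checked against a synthetic log; the record layout after the first field is made up here, only the leading millisecond timestamp matters to the awk script:

```shell
# Two synthetic build-time.log records, 120000 ms (2 minutes) apart,
# fed to the same minutes-conversion awk as above.
printf '1000000000:end:pkg-a\n1000120000:end:pkg-b\n' |
  awk -F: 'NR==1{start=$1}; END{print ($1 - start)/(60000.0)}'
```

which prints 2.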
Or to conveniently do a clean build without affecting your current one:
./bench-all -b
cat ../linux-kernel-module-cheat-regression/*/build-time.log
cd out/x86_64/buildroot
make graph-build graph-depends
xdg-open graphs/build.pie-packages.pdf
xdg-open graphs/graph-depends.pdf
Our philosophy is:
-
if something adds little to the build time, build it in by default
-
otherwise, make it optional
-
try to keep the toolchain (GCC, Binutils) unchanged, otherwise a full rebuild is required.
So we generally just enable all toolchain options by default, even though this adds a bit of time to the build.
-
if something is very valuable, we just add it by default even if it increases the build time, notably GDB and QEMU
-
runtime is sacred.
We do our best to reduce the instruction and feature count to the bare minimum needed, to make the system:
-
easier to understand
-
run faster, especially for gem5
One possibility we could play with is to build loadable modules instead of built-in modules to reduce runtime, but make it easier to get started with the modules.
-
The biggest build time hog is always GCC, and it does not look like we can use a precompiled one: https://stackoverflow.com/questions/10833672/buildroot-environment-with-host-toolchain
This is the minimal build we could expect to get away with.
We will run this whenever the Buildroot submodule is updated.
On the upstream Buildroot repo:
./bench-all -B
Sample time on 2017.08: 11 minutes, 7 with full ccache hits. Breakdown: 47% GCC, 15% Linux kernel, 9% uclibc, 5% host-binutils. Conclusions:
-
we have bloated our kernel build 3x with all those delicious features :-)
-
GCC time increased 1.5x by our bloat, but its percentage of the total was greatly reduced, due to new packages being introduced.
make graph-depends shows that most new dependencies come from QEMU and GDB, which we can’t get rid of anyways.
A quick look at the system monitor reveals that the build switches between times when:
-
CPUs are at a max, memory is fine. So we must be CPU / memory speed bound. I bet that this happens during heavy compilation.
-
CPUs are not at a max, and memory is fine. So we are likely disk bound. I bet that this happens during configuration steps.
This is consistent with the fact that ccache reduces the build time only partially, since ccache should only overcome the CPU bound compilation steps, but not the disk bound ones.
The instructions counts varied very little between the baseline and LKMC, so runtime overhead is not a big deal apparently.
Size:
-
bzImage: 4.4M
-
rootfs.cpio: 1.6M
Zipped: 4.9M, rootfs.cpio deflates 50%, bzImage almost nothing.
How long it takes to build gem5 itself.
We will update this whenever the gem5 submodule is updated.
Sample results at gem5 2a9573f5942b5416fb0570cf5cb6cdecba733392: 10 to 12 minutes.
Get results with:
./bench-all -g
tail -n+1 ../linux-kernel-module-cheat-regression/*/gem5-bench-build-*.txt
However, I have noticed that for some builds, with the exact same commands, it just takes way longer sometimes, but I haven’t been able to pin it down: cirosantilli2/gem5-issues#10
Lenovo ThinkPad P51 laptop:
-
2500 USD in 2018 (high end)
-
Intel Core i7-7820HQ Processor (8MB Cache, up to 3.90GHz) (4 cores 8 threads)
-
32GB(16+16) DDR4 2400MHz SODIMM
-
512GB SSD PCIe TLC OPAL2
-
NVIDIA Quadro M1200 Mobile, latest Ubuntu supported proprietary driver
-
Latest Ubuntu
2c12b21b304178a81c9912817b782ead0286d282:
-
shallow clone of all submodules: 4 minutes.
-
make source: 2 minutes
Google M-lab speed test: 36.4Mbps
gem5:
-
https://www.mail-archive.com/[email protected]/msg15262.html which parts of the gem5 code make it slow
-
what are the minimum system requirements:
Multi-call executable that implements: lsmod, insmod, rmmod, and other tools on desktop distros such as Ubuntu 16.04, where e.g.:
ls -l /bin/lsmod
gives:
lrwxrwxrwx 1 root root 4 Jul 25 15:35 /bin/lsmod -> kmod
and:
dpkg -l | grep -Ei
contains:
ii kmod 22-1ubuntu5 amd64 tools for managing Linux kernel modules
BusyBox also implements its own version of those executables. There are some differences.
Buildroot also has a kmod package, but we are not using it since BusyBox' version is good enough so far.
This page only describes features where the BusyBox implementation differs from kmod.
Name of a predecessor set of tools.
kmod’s modprobe can also load modules under different names to avoid conflicts, e.g.:
sudo modprobe vmhgfs -o vm_hgfs
platform_device contains a minimal runnable example.
Good format descriptions:
Minimal example
/dts-v1/;
/ {
a;
};
Check correctness with:
dtc a.dts
Separate nodes are simply merged by node path, e.g.:
/dts-v1/;
/ {
a;
};
/ {
b;
};
then dtc a.dts gives:
/dts-v1/;
/ {
a;
b;
};
This is especially interesting because QEMU and gem5 are capable of generating DTBs that match the selected machine depending on dynamic command line parameters for some types of machines.
QEMU’s -M virt for example, which we use by default for aarch64, boots just fine without the -dtb option:
./run -a aarch64
Then, from inside the guest:
dtc -I fs -O dts /sys/firmware/devicetree/base
contains:
cpus {
#address-cells = <0x1>;
#size-cells = <0x0>;
cpu@0 {
compatible = "arm,cortex-a57";
device_type = "cpu";
reg = <0x0>;
};
};
However, if we increase the number of cores:
./run -a aarch64 -c 2
QEMU automatically adds a second CPU to the DTB!
The action seems to be happening at: hw/arm/virt.c.
gem5 fs_bigLITTLE 2a9573f5942b5416fb0570cf5cb6cdecba733392 can also generate its own DTB.
-
data: gitignored user-created data. Deleting this might lead to loss of data. Of course, if something there becomes important enough to you, git track it.
-
data/readfile: see m5 readfile -
data/9p: see 9P -
data/gem5/<variant>: see: gem5 build variants
-
-
kernel_module: Buildroot package that contains our kernel modules and userland C tests
-
out: gitignored build outputs. You won’t lose data by deleting this folder, since everything there can be re-generated, only time.
-
out/<arch>: arch specific outputs-
out/<arch>/buildroot: standard Buildroot output-
out/<arch>/buildroot/build/linux-custom: symlink to a variant, custom madness that we do on top of Buildroot: Linux kernel build variants -
out/<arch>/buildroot/build/linux-custom.<variant>: what linux-custom points to
-
-
out/<arch>/qemu: QEMU runtime outputs -
out/<arch>/qemu/<run-id>/run.sh: full CLI used to run QEMU. See: Report upstream bugs -
out/<arch>/gem5/<gem5-variant>/<run-id>/: gem5 runtime outputs-
out/<arch>/gem5/<gem5-variant>/<run-id>/m5out -
out/<arch>/gem5/<gem5-variant>/<run-id>/run.sh: full CLI used to run gem5. See: Report upstream bugs
-
-
-
out/common: cross arch outputs, for when we can gain a lot of time and space by sharing things that are common across different archs.-
out/common/dl/: Buildroot caches downloaded source there due to BR2_DL_DIR
-
out/common/gem5/: arm and aarch64 have the same build.
-
out/common/gem5/<gem5-variant>/: gem5 build output. In common to share the ARM and aarch64 builds.
-
out/common/gem5/<gem5-variant>/build/: main build outputs, including the gem5.opt executable and object files
-
out/common/gem5/<gem5-variant>/system/: M5_PATH directory, with DTBs and bootloaders
-
-
-
-
Every .patch file in this directory gets applied to Buildroot before anything else is done.
This directory has been made kind of useless when we decided to use our own Buildroot fork, but we’ve kept the functionality just in case we someday go back to upstream Buildroot.
We build the gem5 emulator through Buildroot basically just to reuse its timestamping system to avoid rebuilds.
There is also the m5 tool that we must build through Buildroot and install on the root filesystem, but we could just make two separate builds.
This directory has the following structure:
package-name/00001-do-something.patch
The patches are then applied to the corresponding packages before build.
Uses BR2_GLOBAL_PATCH_DIR.
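Concretely, the mechanism is just a single Buildroot config option pointing at this directory; a sketch of the config fragment (the exact path value is set by our scripts, the one shown here is an assumption):

```
BR2_GLOBAL_PATCH_DIR="patches"
```

Buildroot then looks for package-name/*.patch files under that directory and applies them in lexicographic order before building each package.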
Any directory in that subdirectory is added to BR2_EXTERNAL and becomes available to the build.
Copied into the target filesystem.
We use it for:

- customized configuration files
- userland module test scripts that don’t need to be compiled. C files for example need compilation, and must go through the regular package system, e.g. through kernel_module/user.
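For instance, such an interpreted guest test script could look like the following minimal sketch (a hypothetical file, not one of the actual rootfs_overlay scripts):

```shell
#!/bin/sh
# Hypothetical interpreted guest test: it can live directly in the
# rootfs overlay with no build step, unlike C sources which must go
# through the package system (e.g. kernel_module/user).
set -e
echo 'running smoke test'
# check a pseudo filesystem that any booted Linux guest should have
[ -d /proc ] && echo 'proc ok'
```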
These appear when you do ./some-script -h.
We have to keep them as separate files from the README for that to be possible.
Testing that should be done for every functional patch:
./run -a x86_64 -e '- lkmc_eval="/insrm.sh hello 5;/sbin/ifup -a;wget -S google.com;poweroff;"'
./run -a arm -e '- lkmc_eval="/insrm.sh hello 5;/sbin/ifup -a;wget -S google.com;poweroff;"'
Should:

- boot
- show hello.ko init and exit messages
- make a network request
- shutdown gracefully
We are slowly automating guest tests with:
./run -F '/test_all.sh;/poweroff.out' | grep lkmc_test
which outputs lkmc_test_pass or lkmc_test_fail.
Source: rootfs_overlay/test_all.sh.
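The pass/fail protocol is simple enough to sketch. This is an illustrative reimplementation under assumed semantics, not the actual rootfs_overlay/test_all.sh, and the test list is a placeholder:

```shell
#!/bin/sh
# Sketch of a test_all.sh-style runner: run each guest test command
# and emit a single lkmc_test_pass / lkmc_test_fail line, which the
# host side can then grep out of the serial console output.
status=lkmc_test_pass
for cmd in 'true' 'echo hello'; do
  if ! sh -c "$cmd" >/dev/null 2>&1; then
    status=lkmc_test_fail
  fi
done
echo "$status"
```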
Shell 1:
./run -d
Shell 2:
./rungdb start_kernel
Should break GDB at start_kernel.
Then proceed to do the following tests:

- /count.sh and b __x64_sys_write
- insmod /timer.ko and b lkmc_timer_callback
Basic C and C++ hello worlds:

/hello.out
/hello_cpp.out

Output:

hello
hello cpp
Sources:
Print out several parameters that normally change randomly from boot to boot:
./run -F '/rand_check.out;/poweroff.out'
Source: kernel_module/user/rand_check.c
This can be used to check the determinism of:
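A host-side way to use this is to capture two boots and diff them. In this sketch the two boot logs are simulated with printf so the snippet stands alone; in practice you would replace those two lines with two of the ./run invocations above:

```shell
# Sketch: compare the rand_check output of two boots. A non-empty
# diff means some source of randomness (ASLR, timers, ...) leaked
# through. The printf lines stand in for two './run -F ...' captures.
printf 'stack: 0x1000\nheap: 0x2000\n' > boot1.log
printf 'stack: 0x1000\nheap: 0x2000\n' > boot2.log
if diff -q boot1.log boot2.log >/dev/null; then
  echo deterministic
else
  echo nondeterministic
fi
```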
This project is for people who want to learn and modify low level system components:

- Linux kernel and Linux kernel modules
- full system emulators like QEMU and gem5
- C standard libraries. This could also be put in a submodule if people show interest.
- Buildroot. We use, and therefore document, a large part of its feature set.
Philosophy:

- automate as much as possible to make things more reproducible
- do everything from source to make things understandable and hackable
This project should perhaps be called "Linux kernel playground", like https://github.com/Fuzion24/AndroidKernelExploitationPlayground; maybe I’ll rename it some day. It would semi-conflict with http://copr-fe.cloud.fedoraproject.org/coprs/jwboyer/kernel-playground/ though.
Once upon a time, there was a boy called Linus.
Linus made a super fun toy, and since he was not very humble, decided to call it Linux.
Linux was an awesome toy, but it had one big problem: it was very difficult to learn how to play with it!
As a result, only some weird kids who were very bored ended up playing with Linux, and everyone thought those kids were very cool, in their own weird way.
One day, a mysterious new kid called Ciro tried to play with Linux, and like many before him, got very frustrated, and gave up.
A few years later, Ciro had grown up a bit, and by chance came across a very cool toy made by the boy Petazzoni and his gang: it was called Buildroot.
Ciro noticed that if you used Buildroot together with Linux, Linux suddenly became very fun to play with!
So Ciro decided to explain to as many kids as possible how to use Buildroot to play with Linux.
And so everyone was happy. Except some of the old weird kernel hackers who wanted to keep their mystique, but so be it.
THE END
Runnable stuff:

- https://lwn.net/Kernel/LDD3/ the best book, but outdated. Updated source: https://github.com/martinezjavier/ldd3 But the examples are not minimal and take too much brain power to understand.
- https://github.com/satoru-takeuchi/elkdat manual build process without Buildroot, very few and simple kernel modules. But it seems to have ktest + QEMU working, which is awesome. ./test there patches the ktest config dynamically based on the CLI! Maybe we should just steal it since it is GPL licensed.
- https://github.com/tinyclub/linux-lab Buildroot based, no kernel modules?
- https://github.com/linux-kernel-labs Yocto based, source inside a kernel fork subdir: https://github.com/linux-kernel-labs/linux/tree/f08b9e4238dfc612a9d019e3705bd906930057fc/tools/labs which the author would like to upstream https://www.reddit.com/r/programming/comments/79w2q9/linux_device_driver_labs_the_linux_kernel/dp6of43/
- Android AOSP: https://stackoverflow.com/questions/1809774/how-to-compile-the-android-aosp-kernel-and-test-it-with-the-android-emulator/48310014#48310014 AOSP is basically an uber-bloated Buildroot (2 hour build vs 30 minutes), Android is Linux based, and QEMU is the emulator backend. These instructions might work for debugging the kernel: https://github.com/Fuzion24/AndroidKernelExploitationPlayground
- https://github.com/s-matyukevich/raspberry-pi-os Does both an OS from scratch, and annotates the corresponding kernel source code. For RPI3; no QEMU support: s-matyukevich/raspberry-pi-os#8
Theory:

- http://nairobi-embedded.org you will fall here a lot when you start popping the hard QEMU Google queries. They have covered basically everything we do here, but with a more manual approach, while this repo automates everything. I couldn’t find the markup source code for the tutorials, and as a result, since the domain went down in May 2018, you have to use http://web.archive.org/ to see the pages…
- https://balau82.wordpress.com awesome low level resource
- https://rwmj.wordpress.com/ awesome red hatter
Awesome lists:
