Yes, yes it can.

$ journalctl -o short-monotonic -b
[    0.000000] alan-laptop kernel: microcode: microcode updated early to revision 0x2a, date = 2018-01-18
[    0.000000] alan-laptop kernel: Linux version 4.15.14-300.fc27.x86_64 ([emailprotected]) (gcc version 7.3.1 20180303 (Red Hat 7.3.1-5) (GCC)) #1 SMP Thu Mar 29 16:13:44 UTC 2018
...
[    0.000000] alan-laptop kernel: x2apic: IRQ remapping doesn't support X2APIC mode
[    0.001000] alan-laptop kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[    0.006000] alan-laptop kernel: tsc: Fast TSC calibration using PIT
[    0.007000] alan-laptop kernel: tsc: Detected 2294.717 MHz processor
[    0.007000] alan-laptop kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4589.43 BogoMIPS (lpj=2294717)

The timestamps match exactly with dmesg, even for suspend/resume. (I do not attempt to nitpick whether this means they are CLOCK_BOOTTIME timestamps, as opposed to CLOCK_MONOTONIC timestamps; possibly the journald field name is confusing, but it is exactly what I want.)
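As a follow-up usage note: the -k flag restricts the output to kernel messages (the same stream dmesg reads), which pairs naturally with the monotonic format when checking suspend/resume timings:

$ journalctl -o short-monotonic -b -k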
By default journalctl shows messages in the traditional system log format, including the CLOCK_REALTIME stamp, i.e. wall-clock time (and calendar date). However, this doesn't show accurate timestamps for kernel messages that were logged while journald wasn't running, e.g. during boot or the suspend/resume procedure. Most kernels nowadays default to enabling printk.time, so dmesg shows a timestamp in front of every log message. Can journalctl be made to show the original kernel timestamps? I want to check the precise timings of historical suspend/resume log messages.
View original kernel message timings from historical system logs
Edit the file /etc/default/grub and add the parameters below to GRUB_CMDLINE_LINUX:

$ sudo nano /etc/default/grub
GRUB_CMDLINE_LINUX="acpi_osi=Linux acpi_backlight=vendor"

Do not forget to run update-grub afterwards. For me that solved the issue! I had it on an ATI/AMD card, a rather old GPU. I also want to mention that this didn't break the ability to adjust the screen brightness. May your htop stats be low and your beard grow long!
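A sketch of the regeneration step: update-grub is a Debian/Ubuntu wrapper, and on distributions without it grub-mkconfig does the same job (the grub.cfg path can differ, e.g. /boot/grub2/grub.cfg on Fedora/RHEL):

$ sudo update-grub
$ sudo grub-mkconfig -o /boot/grub/grub.cfg    # equivalent where update-grub is absent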
I am always getting this error at boot. Is it anything major, and if it is, how can it be fixed? I get that it has something to do with my AMD GPU. Here is the setup I have:

CPU: AMD A9-9420 RADEON R5, 5 COMPUTE CORES 2C+3G, 2586 MHz
GPU: ATI Stoney [Radeon R2/R3/R4/R5 Graphics], AMD ATI Radeon R5 M230 / R7 M260DX / Radeon 520 Mobile
4GB RAM, 250GB SSD
OS: Linux manjaro 5.9.16-1-MANJARO #1 SMP PREEMPT Mon Dec 21 22:00:46 UTC 2020 x86_64 GNU/Linux

Also here's the kernel journal from the last boot:

Mar 16 09:24:57 manjaro systemd[1]: Starting Load/Save Screen Backlight Brightness of backlight:acpi_video0...
  Subject: A start job for unit systemd-backlight@backlight:acpi_video0.service has begun execution
  Defined-By: systemd
  Support: https://forum.manjaro.org/c/support
  A start job for unit systemd-backlight@backlight:acpi_video0.service has begun execution. The job identifier is 1270.
Mar 16 09:24:57 manjaro systemd-backlight[875]: Failed to get backlight or LED device 'backlight:acpi_video0': No such device
Mar 16 09:24:57 manjaro systemd[1]: systemd-backlight@backlight:acpi_video0.service: Main process exited, code=exited, status=1/FAILURE
  Subject: Unit process exited
  Defined-By: systemd
  Support: https://forum.manjaro.org/c/support
  An ExecStart= process belonging to unit systemd-backlight@backlight:acpi_video0.service has exited. The process' exit code is 'exited' and its exit status is 1.
Mar 16 09:24:57 manjaro systemd[1]: systemd-backlight@backlight:acpi_video0.service: Failed with result 'exit-code'.
  Subject: Unit failed
  Defined-By: systemd
  Support: https://forum.manjaro.org/c/support
  The unit systemd-backlight@backlight:acpi_video0.service has entered the 'failed' state with result 'exit-code'.
Mar 16 09:24:57 manjaro systemd[1]: Failed to start Load/Save Screen Backlight Brightness of backlight:acpi_video0.
  Subject: A start job for unit systemd-backlight@backlight:acpi_video0.service has failed
  Defined-By: systemd
  Support: https://forum.manjaro.org/c/support
  A start job for unit systemd-backlight@backlight:acpi_video0.service has finished with a failure. The job identifier is 1270 and the job result is failed.
Mar 16 09:24:57 manjaro audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj==unconfined msg='unit=systemd-backlight@backlight:acpi_video0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Mar 16 09:24:57 manjaro kernel: audit: type=1130 audit(1615879497.236:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj==unconfined msg='unit=systemd-backlight@backlight:acpi_video0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Mar 16 09:24:57 manjaro systemd[1]: Starting Load/Save Screen Backlight Brightness of backlight:acpi_video1...
  Subject: A start job for unit systemd-backlight@backlight:acpi_video1.service has begun execution
  Defined-By: systemd
  Support: https://forum.manjaro.org/c/support
  A start job for unit systemd-backlight@backlight:acpi_video1.service has begun execution. The job identifier is 1276.
Mar 16 09:24:57 manjaro systemd-backlight[876]: Failed to get backlight or LED device 'backlight:acpi_video1': No such device
Mar 16 09:24:57 manjaro systemd[1]: systemd-backlight@backlight:acpi_video1.service: Main process exited, code=exited, status=1/FAILURE
  Subject: Unit process exited
  Defined-By: systemd
  Support: https://forum.manjaro.org/c/support
  An ExecStart= process belonging to unit systemd-backlight@backlight:acpi_video1.service has exited. The process' exit code is 'exited' and its exit status is 1.
Mar 16 09:24:57 manjaro systemd[1]: systemd-backlight@backlight:acpi_video1.service: Failed with result 'exit-code'.
  Subject: Unit failed
  Defined-By: systemd
  Support: https://forum.manjaro.org/c/support
  The unit systemd-backlight@backlight:acpi_video1.service has entered the 'failed' state with result 'exit-code'.
Mar 16 09:24:57 manjaro systemd[1]: Failed to start Load/Save Screen Backlight Brightness of backlight:acpi_video1.
  Subject: A start job for unit systemd-backlight@backlight:acpi_video1.service has failed
  Defined-By: systemd
  Support: https://forum.manjaro.org/c/support
  A start job for unit systemd-backlight@backlight:acpi_video1.service has finished with a failure. The job identifier is 1276 and the job result is failed.
Mar 16 09:24:57 manjaro audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj==unconfined msg='unit=systemd-backlight@backlight:acpi_video1 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Mar 16 09:24:57 manjaro kernel: audit: type=1130 audit(1615879497.263:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj==unconfined msg='unit=systemd-backlight@backlight:acpi_video1 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Mar 16 09:24:57 manjaro ModemManager[412]: <info> [base-manager] couldn't check support for device '/sys/devices/pci0000:00/0000:00:02.3/0000:03:00.0': not supported by any plugin
Kernel error: systemd-backlight@backlight:acpi_video0.service
Is there a way to clear systemd/journal messages manually only keeping high priority messages?

No.

Can the --vacuum-time option of journalctl be combined with any type of filter?

No. Your options are:
- Not logging
- Logging outside of journald
- Logging somewhere else if you have a network connection
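For the "logging outside of journald" option, a minimal sketch assuming rsyslog is installed and ForwardToSyslog=yes is set in journald.conf (the file names are hypothetical); the classic *.err selector matches severity err and worse, i.e. priorities 0..3:

# /etc/rsyslog.d/keep-errors.conf (hypothetical name)
*.err   /var/log/errors.log   # emerg, alert, crit, err only
& stop                        # optional: stop further processing of these messages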
Is there a way to clear systemd/journal messages manually only keeping high priority messages? I have a system with very limited storage and would like to keep the journal compact while still keeping track of errors. Messages of priority 0..3 should be kept as long as there is sufficient space, messages of lower priority can be cleared if they are older than a day. Can the --vacuum-time option of journalctl be combined with any type of filter?
journalctl - clear low priority messages
SystemMaxFileSize= and RuntimeMaxFileSize= control how large individual journal files may grow at most. This influences the granularity in which disk space is made available through rotation, i.e. deletion of historic data. Defaults to one eighth of the values configured with SystemMaxUse= and RuntimeMaxUse=, so that usually seven rotated journal files are kept as history. If the journal compact mode is enabled (enabled by default), the maximum file size is capped to 4G.

That means in theory your journald logs may occupy as much as 28GB per user. To reduce the size of your logs, run:

sudo journalctl --vacuum-size=100M

To avoid the issue, please define SystemMaxUse. I personally have:

$ cat /etc/systemd/journald.conf.d/systemMaxUse.conf
[Journal]
SystemMaxUse=64M

More than enough for a home PC.
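To see how much space the journal actually occupies before and after vacuuming, journalctl has a built-in query:

$ journalctl --disk-usage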
I was perplexed to find that running, for instance, journalctl -f showed my logs stopping on Apr 20th, 8 months ago. I piped to less, journalctl | less, and pressed G to go to the end: same thing. Then I did journalctl | less and went down 'slowly' (ctrl+d); this way I was able to go way further than Apr 20th... There's a LOT in the logs! I think it is because there's a limit on what is being loaded. However, shouldn't the logs get rotated? Or somehow pruned without user intervention? In my journald.conf, SystemMaxFileSize/SystemMaxUse isn't set (a solution I found here). I could try the above, but I kinda want to get to the bottom of it. I'm on Manjaro 5.8.18-1. Any help or insights are greatly appreciated. Thanks!
journalctl log way too big?
I'm writing an answer because I don't have enough reputation to comment. Anyway, as @thanasisp has said, there is the strace command line. But there is another interesting framework to trace software: lttng, with its trace viewer babeltrace. You can trace the kernel, C binaries, and Python and Java software. And here is a quick-start tutorial.
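For the immediate problem here, a minimal strace sketch (the output path is arbitrary): -f follows child processes, which matters since the setup command forks, and -tt adds timestamps:

$ strace -f -tt -o /tmp/libinput.trace libinput-gestures-setup start
$ less /tmp/libinput.trace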
Edited: Since the initial question was too general I will focus on only one program. I am running the command libinput-gestures-setup start, which comes bundled with libinput-gestures and it seems to work but ps shows no record of the process started and libinput-gestures-setup status says that program hasn't been started. journalctl doesn't log anything on it. I would like to see everything that happens when I run the command so I can debug it and get it running. I'm running Arch Linux on kernel 5.9.2, systemd 246.6, util-linux 2.35.2, xorg-server 1.20.9, herbstluftwm 0.8.3
Is there a way to log program specific behavior?
The 64M was coming from an additional config file I was unaware of: /lib/systemd/journald.conf.d/00-systemd-conf.conf. With that additional setting of RuntimeMaxUse=64M removed, I can now set the desired values in /etc/systemd/journald.conf. Tip: use strace on journald startup to see what config files it really uses and which order they are read in. This is how the /lib config file was finally revealed.
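Newer systemd versions can also show the merged result directly, which makes the drop-in ordering visible without the strace detour; a sketch, assuming systemd-analyze is available (the cat-config verb exists in systemd 241 and later):

$ systemd-analyze cat-config systemd/journald.conf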
Setup

Using systemd 244 (244.5+) on kernel 4.19.62. I want to set the total journal storage size to 100MB. Journald is set to volatile storage, so logs end up on /run/log/journal/... and RuntimeMaxUse should be used to set the storage quota, as follows in /etc/systemd/journald.conf:

[Journal]
Storage=volatile
RuntimeMaxUse=100M
RuntimeMaxFileSize=2M

By tweaking these config values, the journal log size and quota do change, and startup messages in journalctl also show the change. Between config value changes, I stop systemd-journald.service, delete all system* files under /run/log/journal/... and restart the service. NOTE: when Storage=persistent and logs use /var/log/journal/, the equivalent SystemMaxUse is respected correctly. This appears to only be a bug in volatile/RuntimeMaxUse.

Observed bug

The RuntimeMaxUse (100M) is ignored. The journal quota is set to 64MB so long as RuntimeMaxFileSize is less than 32M. If RuntimeMaxFileSize is set to more than 32M, the journal quota is set to double that value. RuntimeMaxUse appears to be ignored in both cases. Is this a bug in journald/systemd config handling? Why is RuntimeMaxUse ignored, and a value of 64M or double RuntimeMaxFileSize used as the journal quota instead? I see there are a couple of places in the journald source where max_use can be set to double max_size:

https://github.com/systemd/systemd/blob/v244/src/journal/journal-file.c#L3747
https://github.com/systemd/systemd/blob/v244/src/journal/journal-file.c#L3772

Bug examples

With RuntimeMaxUse=100M and RuntimeMaxFileSize=2M a quota of 64M (instead of my requested 100M) is set, as seen in the journal startup messages:

systemd-journald[20312]: Runtime Journal (/run/log/journal/...) is 2.0M, max 64.0M, 62.0M free.
-- Runtime Journal (/run/log/journal/...) is currently using 2.0M.
-- Maximum allowed usage is set to 64.0M.
-- Leaving at least 1.5G free (of currently available 31.2G of disk space).
-- Enforced usage limit is thus 64.0M, of which 62.0M are still available.

Using RuntimeMaxUse=100M and RuntimeMaxFileSize=31M, 64M is still used:

systemd-journald[20989]: Runtime Journal (/run/log/journal/...) is 8.0M, max 64.0M, 56.0M free.
-- Runtime Journal (/run/log/journal/...) is currently using 8.0M.
-- Maximum allowed usage is set to 64.0M.
-- Leaving at least 1.5G free (of currently available 31.2G of disk space).
-- Enforced usage limit is thus 64.0M, of which 56.0M are still available.

Using RuntimeMaxUse=100M and RuntimeMaxFileSize=33M, the quota ends up 66M:

systemd-journald[21557]: Runtime Journal (/run/log/journal/...) is 8.0M, max 66.0M, 58.0M free.
-- Runtime Journal (/run/log/journal/...) is currently using 8.0M.
-- Maximum allowed usage is set to 66.0M.
-- Leaving at least 1.5G free (of currently available 31.2G of disk space).
-- Enforced usage limit is thus 66.0M, of which 58.0M are still available.

Using RuntimeMaxUse=100M and RuntimeMaxFileSize=200M we break past the 100M limit, with 400M seemingly coming from double the RuntimeMaxFileSize of 200M:

systemd-journald[25271]: Runtime Journal (/run/log/journal/...) is 8.0M, max 400.0M, 392.0M free.
-- Runtime Journal (/run/log/journal/...) is currently using 8.0M.
-- Maximum allowed usage is set to 400.0M.
-- Leaving at least 1.5G free (of currently available 31.2G of disk space).
-- Enforced usage limit is thus 400.0M, of which 392.0M are still available.
journald RuntimeMaxUse is ignored, quota tied to RuntimeMaxFileSize instead
The file system in our case is jffs2. Persistent journal would not work on jffs2. More details are in my Github systemd issue #2571.
My embedded box is running Linux 5.15 with systemd 251 (251.2+). I have configured persistent logging for the journal:

/etc/systemd/journald.conf
[Journal]
Storage=persistent

Created folder /var/log/journal. This is mounted on an mtd flash partition.

ls -alt /var/log/journal/
drwxr-sr-x 2 root systemd- 0 Jan 1 00:03 2b4305f670484d1fa6b9c4deee336b91

Journald creates a folder under /var/log/journal, but I don't see anything ever getting stored here. I don't see the journal being persistent across reboots. Journal logs are kept only in /run/log/journal, which is tmpfs on this system and gets erased on each reboot. I have tried journalctl --flush to see if anything gets pushed into /var/log/journal, but nothing is stored apart from the folder name. journalctl --rotate also has no impact. I seem to be doing everything as per the journalctl documentation, but it still isn't working. Any help?
systemd journal is not stored in /var/log/journal and not persistent after reboots
Systemd journal data is not stored in plain text. It is designed to be read through the journalctl tool. If you wish, you can use strace to confirm that journalctl is indeed reading files from /run/log/journal, among other places:

strace journalctl 2>&1 | grep /run/log/journal/
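If you want to read that one file specifically, rather than the whole journal hierarchy, journalctl can be pointed at it directly with --file (path taken from the question):

$ journalctl --file /run/log/journal/xxxxxx/system.journal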
Reading the content of man journalctl, I came across the following:

Storage=
    Controls where to store journal data. One of "volatile", "persistent", "auto" and "none". If "volatile", journal log data will be stored only in memory, i.e. below the /run/log/journal hierarchy (which is created if needed). If "persistent", data will be stored preferably on disk, i.e. below the /var/log/journal hierarchy (which is created if needed), with a fallback to /run/log/journal (which is created if needed), during early boot and if the disk is not writable. "auto" is similar to "persistent" but the directory /var/log/journal is not created if needed, so that its existence controls where log data goes. "none" turns off all storage, all log data received will be dropped. Forwarding to other targets, such as the console, the kernel log buffer, or a syslog socket will still work however. Defaults to "auto".

however when trying to less a file there, I get:

[root@long-misc-p001 logs]# less /run/log/journal/xxxxxx/system.journal
"/run/log/journal/xxxxxx/system.journal" may be a binary file.  See it anyway?

I haven't set that option to persistent yet, but if I then proceed to less it, I get a binary file - is this expected? journalctl usually gives me text.
cannot read saved journalctl logs
"I do not see anything helpful in man journalctl"

There is a --facility option:

journalctl --facility=local3
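For reference, the option also accepts help, which prints the list of facility names journalctl knows, and it combines with other flags:

$ journalctl --facility=help          # list known facility names
$ journalctl --facility=local3 -f     # follow local3 messages live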
I can do the following to get local3 logs:

journalctl SYSLOG_FACILITY=19

The magic number 19 is utterly confusing. How can I do the following: journalctl WHAT_TO_PUT_HERE=local3? I do not see anything helpful in man journalctl or journalctl -n10 -ojson | jq | less.
How to filter journalctl logs by human readable facility?
Don't ask me why, but journalctl _COMM=systemd-coredump doesn't work as it produces: -- No entries --. No big deal. I'm sure it can be done a lot easier by e.g. using JSON output, but that would involve using and parsing jq output, and since I'm not a proper programmer I decided to use awk instead. Here's what I got:

#! /bin/bash

journalctl --output=short-unix | awk '{
    if ($1 ~ /^[0-9]/) {
        if (found == 1 && fname != "") system("touch -d @"unixts" "fname)
        found=0
    }
    if ($0 ~ "dumped core") {
        # extract timestamp and process name and generate a filename
        split($1, arr, ".")
        if (arr[1] != "") unixts=arr[1]
        psta=index($0, "(")
        pend=index($0, ")")
        pname=substr($0, psta+1, pend-psta-1)
        fname=pname"-"unixts".txt"
        found=1
    }
    if (found == 1) print $0 >> fname
}'

As a result you get nice, easy-to-view files, with a bonus of these files having the same timestamps as the appropriate log events:

$ ls -la
drwxr-xr-x.  2 root root    740 Jan 20 13:12 .
drwxrwxrwt. 22 root root    820 Jan 20 13:12 ..
-rw-r--r--.  1 root root 105937 Jan 14 19:55 chrome-1673726131.txt
-rw-r--r--.  1 root root  73845 Jan 13 22:20 xfce4-panel-1673648402.txt
-rw-r--r--.  1 root root  73853 Jan 14 18:05 xfce4-panel-1673719532.txt
-rw-r--r--.  1 root root  72205 Jan 16 15:33 xfce4-panel-1673883202.txt
-rw-r--r--.  1 root root  73845 Jan 17 12:49 xfce4-panel-1673959785.txt
-rw-r--r--.  1 root root  62702 Jan 10 08:31 xfce4-screensav-1673339519.txt
-rw-r--r--.  1 root root  62577 Jan 10 08:32 xfce4-screensav-1673339524.txt
-rw-r--r--.  1 root root  62702 Jan 11 11:13 xfce4-screensav-1673435632.txt
-rw-r--r--.  1 root root  62577 Jan 11 11:14 xfce4-screensav-1673435640.txt

Hopefully this will be useful for other people as well.
Here's a simple issue. I want an overview, e.g. separate files with debug information for each process which has dumped core, extracted from journalctl. Here's a sample output:

Jan 17 12:49:45 localhost systemd-coredump[137987]: [🡕] Process 3045 (xfce4-panel) of user 1000 dumped core.
Module linux-vdso.so.1 with build-id edcc6cf50d839ad9201a67e8d2de3d1bec5c03fd
Module librsvg-2.so.2 with build-id a172ce96c3c2d136fc30361d4c28b4ab736833e6
Metadata for module librsvg-2.so.2 owned by FDO found: { "type" : "rpm", "name" : "librsvg2", "version" : "2.54.5-1.fc37", "architecture" : "x86_64", "osCpe" : "cpe:/o:fedoraproject:fedora:37" }
Module libpixbufloader-svg.so with build-id 77cf182593e5e19b8bde9397d50f0f4d5acffe51
Metadata for module libpixbufloader-svg.so owned by FDO found: { "type" : "rpm", "name" : "librsvg2", "version" : "2.54.5-1.fc37", "architecture" : "x86_64", "osCpe" : "cpe:/o:fedoraproject:fedora:37" }
... lots more similar messages ...
Jan 17 12:49:45 localhost systemd[1]: [emailprotected]: Deactivated successfully.
Extract journalctl/system log "Process 1234 (processname) of user 1000 dumped core" messages into separate files
A glance at the logrotate status output does seem to confirm that the scality app's logs were rotated. However, there is a bit of confusion here because there are two sets of logs to consider, and what logrotate does does not affect what's returned by journalctl, nor are journald's own files managed by logrotate.

Journald Logs

man systemd-journald.service lists 5 sources for the logs kept and reported by journald:

1. Kernel log messages. These are not output from applications, although they may be influenced by the behaviour of such -- as per the name, they are messages from the kernel. It's the same stuff you can see with dmesg.

2. Simple system log messages, via the libc syslog(3) call. This is a traditional logging method that applications including persistent services can make use of. If you are running a traditional syslog implementation instead of or as well as systemd-journald (they can work nicely together), these are output to a set of files in /var/log such as messages and syslog. If not, they are still caught by journald and show up in its service hierarchy. Note that this does not allow for application specific files unless the logger itself is set up to do that, which is beyond the control of the application. Stuff that's sent to syslog is usually tagged with the application name, but it is all lumped together -- which can be very useful, since kernel messages are usually also captured by syslog, so syslog is an interleaved record of system events in real time. However, many applications do not use this at all, and as far as I am aware there is no strong convention for or against that. Note that systemd itself logs to syslog via journald, syslog being something journald can feed.

3. Structured system log messages via the native Journal API. This is presumably journald's version of the syslog command, meaning it would have to be explicitly used in application source code. How widespread this is I don't know, but probably not much beyond core linux centric things such as desktop environments, package managers, etc.

4. Standard output and standard error of service units. Capturing these is a default behaviour of systemd-journald, but they can also be redirected via the service file to various places. In my experience, most complex service applications do not use this for much.

5. Audit records, originating from the kernel audit subsystem. In context these are really a subcategory of #1.

One more isn't explicitly in the manpage, but unit specific journalctl output does obviously include messages from systemd that pertain to the unit in question (e.g. Starting XXXXX...).

None of these include that bunch of stuff in /var/log/scality, meaning those files have nothing to do with journald or syslog.

Application Logs

I'm not a scality user, but certainly the stuff in /var/log/scality will be stuff that is created directly by the application, and not anything that's been captured externally. An obvious reason for these things not being used by systemd or journald is that programs write data to all kinds of files, and without some configuration protocol to guide the process, external entities like journald have no way of knowing what is logging data and what is not. There's not much justification for such a protocol either since, as per above, there are already mechanisms in place for applications to make use of syslog etc. Things that write a lot of their own log files usually (and this may be the work of distro packagers) arrange for logrotate to keep them trim.
Logrotate predates systemd and has nothing to do with journald. Journald maintains its own persistent storage (if any). In short, the original message about how the "Journal has been rotated since unit was started" is not because of logrotate; this is part of journald's own operation. Further, the scality logs in /var/log/scality are not managed by journald, but by scality itself.
Why do journalctl logs start only today when the last reboot was done 5 days before?

$ journalctl -e | grep Logs.begin
-- Logs begin at Tue 2022-09-06 09:42:37 CEST, end at Tue 2022-09-06 11:04:27 CEST. --
$ last reboot | head -1
reboot   system boot  3.10.0-1160.62.1 Thu Sep  1 23:46 - 11:04 (4+11:18)
$ journalctl -u scality-sfused
-- No entries --
$ egrep -v "^$|^#" /etc/systemd/journald.conf
[Journal]
RateLimitInterval=2s
RateLimitBurst=5000
ForwardToSyslog=yes
$ systemctl status scality-sfused | grep Warning
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
$ ls /etc/logrotate.d/scality-sfused
ls: cannot access /etc/logrotate.d/scality-sfused: No such file or directory
$

EDIT0: Here is the /var/lib/logrotate/logrotate.status for today:

$ cat /var/lib/logrotate/logrotate.status | grep $(date +%Y-%-m-%-d)
"/var/log/scality/node/node-10.log" 2022-9-6-3:11:1
"/var/log/scality/node/chord-node-1.log" 2022-9-6-3:11:1
"/var/log/scality/node/chunkapi-node-5.log" 2022-9-6-3:11:1
"/var/log/scality/node/chord-node-5.log" 2022-9-6-3:11:1
"/var/log/scality/node/node-3.log" 2022-9-6-3:11:1
"/var/log/scality/node/chunkapi-node-9.log" 2022-9-6-3:11:1
"/var/log/scality/node/chord-node-9.log" 2022-9-6-3:11:1
"/var/log/scality/node/node-7.log" 2022-9-6-3:11:1
"/var/log/scality/node/chord-node-11.log" 2022-9-6-3:11:1
"/var/log/scality-biziod.log" 2022-9-6-3:11:1
"/var/log/scality/node/chunkapi-node-12.log" 2022-9-6-3:11:1
"/var/log/scality/backup/scality-backup.log.err" 2022-9-6-3:11:1
"/var/log/scality/node/node-11.log" 2022-9-6-3:11:1
"/var/log/scality/node/chunkapi-node-2.log" 2022-9-6-3:11:1
"/var/log/scality/node/chord-node-2.log" 2022-9-6-3:11:1
"/var/log/scality/backup/scality-backup.log.out" 2022-9-6-3:11:1
"/var/log/scality/node/chunkapi-node-6.log" 2022-9-6-3:11:1
"/var/log/scality/node/chord-node-6.log" 2022-9-6-3:11:1
"/var/log/scality/node/node-4.log" 2022-9-6-3:11:1
"/var/log/scality/node/node-8.log" 2022-9-6-3:11:1
"/var/log/scality/node/chord-node-12.log" 2022-9-6-3:11:1
"/var/log/scality/node/tier1sync-node-3.log" 2022-9-6-3:11:1
"/var/log/scality-srebuildd.log" 2022-9-6-3:11:1
"/var/log/scality/node/tier1sync-node-7.log" 2022-9-6-3:11:1
"/var/log/scality/node/node-12.log" 2022-9-6-3:11:1
"/var/log/scality/node/chunkapi-node-3.log" 2022-9-6-3:11:1
"/var/log/scality-sagentd.log" 2022-9-6-3:11:1
"/var/log/scality/node/chord-node-3.log" 2022-9-6-3:11:1
"/var/log/scality/node/node-1.log" 2022-9-6-3:11:1
"/var/log/scality/node/chunkapi-node-7.log" 2022-9-6-3:11:1
"/var/log/scality/node/chord-node-7.log" 2022-9-6-3:11:1
"/var/log/scality/node/node-5.log" 2022-9-6-3:11:1
"/var/log/scality/node/node-9.log" 2022-9-6-3:11:1
"/var/log/scality/node/chunkapi-node-10.log" 2022-9-6-3:11:1
"/var/log/scality/node/chunkapi-node-4.log" 2022-9-6-3:11:1
"/var/log/scality/node/chord-node-4.log" 2022-9-6-3:11:1
"/var/log/scality/node/node-2.log" 2022-9-6-3:11:1
"/var/log/scality/node/chunkapi-node-8.log" 2022-9-6-3:11:1
"/var/log/scality/node/chord-node-8.log" 2022-9-6-3:11:1
"/var/log/scality/node/node-6.log" 2022-9-6-3:11:1
"/var/log/scality/node/chord-node-10.log" 2022-9-6-3:11:1
"/var/log/scality/node/chunkapi-node-11.log" 2022-9-6-3:11:1

EDIT1: Here is the /etc/anacrontab file:

$ cat /etc/anacrontab
# /etc/anacrontab: configuration file for anacron
# See anacron(8) and anacrontab(5) for details.
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# the maximal random delay added to the base delay of the jobs
RANDOM_DELAY=45
# the jobs will be started during the following hours only
START_HOURS_RANGE=3-22
#period in days   delay in minutes   job-identifier   command
1        5    cron.daily    nice run-parts /etc/cron.daily
7        25   cron.weekly   nice run-parts /etc/cron.weekly
@monthly 45   cron.monthly  nice run-parts /etc/cron.monthly
Why do journalctl logs start only today when the last reboot was done 5 days before?
Try LogLevelMax=err (systemd expects the syslog level names, so err rather than error), which simply limits all logging for the unit to log level err or worse. At least for me, this solved a similar issue where a program would spam journalctl with messages. Maybe it also helps in this case, at least in the sense that it would suppress all non-error messages from the child processes. It is quite self-explanatory, but for more details check the manual for systemd units, which explains exactly what this does: https://www.freedesktop.org/software/systemd/man/systemd.exec.html
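A sketch of applying it as a drop-in to the unit from the question (podman.service is assumed from the context; systemctl edit creates an override file under /etc/systemd/system/podman.service.d/):

$ sudo systemctl edit podman.service
# in the editor that opens, add:
[Service]
LogLevelMax=err
$ sudo systemctl restart podman.service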
I am running SystemD version 249.7+ on a custom embedded Linux device with kernel 5.10.50. I am using Podman 4.0.2 with Docker-Compose to run a few containers. The problem I have is that the 'conmon' child processes from the podman system service duplicate every single journald log entry that the docker-compose process produces. I want to remove the conmon version of the log entries, and keep only those produced by docker-compose. Journalctl shows the duplicate entries like this:

-- Journal begins at Sun 2012-01-01 00:02:21 UTC. --
Mar 30 17:06:15 device conmon[1625]: {"products":["Linux","Windows","Mac"]}
Mar 30 17:06:16 device sh[16648]: app1 | App1 endpoint hit ...
Mar 30 17:06:16 device sh[16648]: app3 | {"products":["Linux","Windows","Mac"]}
Mar 30 17:06:16 device sh[16648]: app2 | App2 endpoint hit ...
Mar 30 17:06:16 device sh[16648]: app1 | 10.89.0.3 - - [30/Mar/2022 17:06:15] "GET / HTTP/1.1" 200 -
Mar 30 17:06:16 device sh[16648]: app2 | Getting http://app1 ...
Mar 30 17:06:16 device sh[16648]: app2 | Status of GET: 200
Mar 30 17:06:16 device sh[16648]: app2 | Results of GET: {"products":["Linux","Windows","Mac"]}
Mar 30 17:06:16 device sh[16648]: app2 |
Mar 30 17:06:16 device sh[16648]: app2 | 10.89.0.4 - - [30/Mar/2022 17:06:15] "GET / HTTP/1.1" 200 -
Mar 30 17:06:21 device conmon[1558]: App2 endpoint hit ...
Mar 30 17:06:21 device conmon[1558]: Getting http://app1 ...
Mar 30 17:06:22 device sh[16648]: app1 | App1 endpoint hit ...
Mar 30 17:06:22 device sh[16648]: app2 | App2 endpoint hit ...
Mar 30 17:06:22 device sh[16648]: app1 | 10.89.0.3 - - [30/Mar/2022 17:06:22] "GET / HTTP/1.1" 200 -
Mar 30 17:06:22 device sh[16648]: app3 | {"products":["Linux","Windows","Mac"]}
Mar 30 17:06:22 device sh[16648]: app2 | Getting http://app1 ...
Mar 30 17:06:22 device sh[16648]: app2 | Status of GET: 200
Mar 30 17:06:22 device sh[16648]: app2 | Results of GET: {"products":["Linux","Windows","Mac"]}
Mar 30 17:06:22 device sh[16648]: app2 |
Mar 30 17:06:22 device sh[16648]: app2 | 10.89.0.4 - - [30/Mar/2022 17:06:22] "GET / HTTP/1.1" 200 -
Mar 30 17:06:22 device conmon[1393]: App1 endpoint hit ...
Mar 30 17:06:22 device conmon[1393]:
Mar 30 17:06:22 device conmon[1393]: 10.89.0.3 - - [30/Mar/2022 17:06:22] "GET / HTTP/1.1" 200 -
Mar 30 17:06:22 device conmon[1558]: Status of GET: 200
Mar 30 17:06:22 device conmon[1558]:
Mar 30 17:06:22 device conmon[1558]: Results of GET: {"products":["Linux","Windows","Mac"]}
Mar 30 17:06:22 device conmon[1558]:
Mar 30 17:06:22 device conmon[1558]: 10.89.0.4 - - [30/Mar/2022 17:06:22] "GET / HTTP/1.1" 200 -
Mar 30 17:06:22 device conmon[1625]: {"products":["Linux","Windows","Mac"]}

The conmon logs are produced by the 'conmon' child processes of the podman.service.
root@device:~# systemctl status podman
● podman.service - Podman API Service
     Loaded: loaded (/lib/systemd/system/podman.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2022-03-30 15:48:55 UTC; 1h 18min ago
TriggeredBy: ● podman.socket
       Docs: man:podman-system-service(1)
   Main PID: 515 (podman)
      Tasks: 17 (limit: 495)
     Memory: 11.8M
        CPU: 2min 11.029s
     CGroup: /system.slice/podman.service
             ├─ 515 /usr/bin/podman --log-level=error system service --time=0
             ├─1391 /usr/bin/dnsmasq -u root --conf-file=/run/containers/cni/dnsname/docker-compose_host_internal_net/dnsmasq.conf
             ├─1393 /usr/bin/conmon --api-version 1 -c 6739cff6019d2f7e8f123d6fb02f163ec99ee73d322672c41d81f85d6218c66f -u 6739cff6019d2f7e8f123d6fb02f163ec99ee73d322672c41d81f85d6218c66f -r /usr/bin/crun -b /con>
             ├─1558 /usr/bin/conmon --api-version 1 -c ae34f69196a5d1b332f2f137942d3728c24bb41d06392b13dcfc7296f39b7936 -u ae34f69196a5d1b332f2f137942d3728c24bb41d06392b13dcfc7296f39b7936 -r /usr/bin/crun -b /con>
             └─1625 /usr/bin/conmon --api-version 1 -c b94e032b37a8690f847442ab9cdcf7b78aefab45231098d02c60b5f79e5c3474 -u b94e032b37a8690f847442ab9cdcf7b78aefab45231098d02c60b5f79e5c3474 -r /usr/bin/crun -b /con>

Mar 30 17:07:19 device conmon[1558]: App2 endpoint hit ...
Mar 30 17:07:19 device conmon[1558]: Getting http://app1 ...
Mar 30 17:07:20 device conmon[1393]: App1 endpoint hit ...
Mar 30 17:07:20 device conmon[1393]: 10.89.0.3 - - [30/Mar/2022 17:07:20] "GET / HTTP/1.1" 200 -
Mar 30 17:07:20 device conmon[1558]: Status of GET: 200
Mar 30 17:07:20 device conmon[1558]:
Mar 30 17:07:20 device conmon[1558]: Results of GET: {"products":["Linux","Windows","Mac"]}
Mar 30 17:07:20 device conmon[1558]:
Mar 30 17:07:20 device conmon[1558]: 10.89.0.4 - - [30/Mar/2022 17:07:20] "GET / HTTP/1.1" 200 -
Mar 30 17:07:20 device conmon[1625]: {"products":["Linux","Windows","Mac"]}

I have tried redirecting the podman.service StandardOutput and StandardError in the SystemD service file to null - but the conmon logs still show up with journalctl.

root@device:~# cat /lib/systemd/system/podman.service
[Unit]
Description=Podman API Service
Requires=podman.socket
After=podman.socket
Documentation=man:podman-system-service(1)
StartLimitIntervalSec=0

[Service]
Environment=XDG_RUNTIME_DIR=
CPUWeight=1000
Type=exec
KillMode=process
StandardOutput=null
StandardError=null
Environment=LOGGING="--log-level=error"
#ExecStart=/bin/sh -c "/usr/bin/podman $LOGGING system service --time=0 1>/dev/null 2>/dev/null"
ExecStart=/usr/bin/podman $LOGGING system service --time=0

How can I get rid of the conmon log entries? Thanks.
SystemD JournalD cannot disable child process output
If this is a user unit, use the --user-unit option (journalctl appends the .service suffix for you if the name has none):

journalctl -f --user-unit=test_unit@random_arguments

Otherwise, filter on the unit field; this is an exact match, so it needs the full unit name including the .service suffix:

sudo journalctl -f _SYSTEMD_UNIT=test_unit@random_arguments.service
I have one systemd service which runs with systemctl --user start test_unit@random_arguments. How could I use journalctl to filter all logs of test_unit? If it supports "follow mode", that's even better.
Use journalctl to show logs of specific unit which has a parameter?
This might be somewhat brute-force, but you can list all enabled services with:

systemctl list-unit-files | grep enabled

You can likely disregard most of the services listed as being fairly vanilla. For the ones you want to take a look at, use:

journalctl -u servicename

Tedious, but it should work. It might be better to use a loop to examine each service and grep for your error message, as sketched below.
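A rough sketch of that loop (the grep pattern is the message from the question; the awk parsing of list-unit-files output is approximate):

for s in $(systemctl list-unit-files --state=enabled --type=service --no-legend | awk '{print $1}'); do
    # search only the current boot to keep it fast
    if journalctl -b -u "$s" --no-pager | grep -q 'Got unexpected auxiliary data'; then
        echo "match in: $s"
    fi
done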
I see many entries of "Got unexpected auxiliary data with level=1 and type=2" in journalctl, but there is no hint where these are generated. The number of log entries is high. I cannot link these entries to specific services. How can I find out which service generates the "Got unexpected auxiliary data with level=1 and type=2" messages?
How to find the source of "Got unexpected auxiliary data with level=1 and type=2"
Thanks to @Artem I've found that my issue is that I set my PAGER variable to be:

export PAGER="/usr/bin/bash -c \"col -b -x | vim -R -c 'set ft=man nolist laststatus=0' -c 'map q :q<cr>' - \""

The smallest reproducible case is export PAGER="/usr/bin/bash -c 'vim -R -'". The solution that I went with is putting this in a script (that I called pager):

#!/usr/bin/env bash
col -b -x | vim -R -c 'set ft=man nolist laststatus=0' -c 'map q :q<cr>' -

and then doing export PAGER=pager. The issue might be that systemd doesn't interpret PAGER properly.
> journalctl
-b: -c: line 0: unexpected EOF while looking for matching `"'
-b: -c: line 1: syntax error: unexpected end of file

> systemctl status docker.service
-b: -c: line 0: unexpected EOF while looking for matching `"'
-b: -c: line 1: syntax error: unexpected end of file

https://clbin.com/0CNIZ <- link to strace of journalctl

I'll update the question if there is need for other information. I've restarted the PC. Deleted the logs and it still fails. Not sure what to try next.
journalctl fails when run
It depends on the freeze, and it's not clear to me what you mean by "shut it down forcibly". If this means a power off, then definitely: there is no way for your PC to sync data in a proper way. The recommended way to force a reboot is by using the magic SysRq key and the REISUB sequence. More details are at:

https://wiki.archlinux.org/index.php/Sysctl
https://en.wikipedia.org/wiki/Magic_SysRq_key

Also, check that /var/log/journal exists, is writable and persistent, and that you have enough free space.
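Note that REISUB only works if the SysRq key is enabled; a sketch for checking and enabling it (the sysctl.d file name is arbitrary):

$ cat /proc/sys/kernel/sysrq                   # current bitmask; 1 means all functions enabled
$ echo 1 | sudo tee /proc/sys/kernel/sysrq     # enable until reboot
$ echo 'kernel.sysrq = 1' | sudo tee /etc/sysctl.d/99-sysrq.conf   # make it persistent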
My PC has frozen a third time and I have had to shut it down forcibly. Why doesn't journalctl save the boot logs before a forced shutdown? When I do journalctl --list-boots I only get the boot after the crash. Am I not sorting correctly, or is this a misconfiguration? System: ArchLinux (5.4.8-arch1-1)
Why doesn't journalctl save my logs of the boot before a forced shutdown?
I finally found that this is done with a kernel feature called the VGA arbiter. Whichever VGA adapter is used as primary by the BIOS ends up being flagged as the "bootvga" device. It's possible to force the VGA arbiter to select and use the next VGA adapter by binding the stub driver to the undesired VGA adapter.

Retrieve the PCI device ID using:

lspci -nn | grep VGA

Add this parameter to your kernel command line (the value is the vendor:device ID pair, not the bus address):

pci-stub.ids=0000:0000

When I move to RHEL 7, I'll be doing PCI passthrough with this disabled adapter. Remember that Nvidia graphics cards also include an audio device and end up in the same IOMMU group as the VGA device. Both PCI device IDs will need to be stubbed.
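To make the format concrete, a hedged example: lspci -nn prints the [vendor:device] pair in brackets at the end of each line, and that pair is what pci-stub.ids= expects. 10de is the Nvidia vendor ID; abcd below is a placeholder device ID:

$ lspci -nn | grep VGA
# suppose the undesired card's line ends in ... [10de:abcd]; then boot with:
pci-stub.ids=10de:abcd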
I have an HP DL380G9 server with two discrete Nvidia graphics cards installed, running RHEL 6 with kernel 2.6.32-573. Both cards have the same chipset (NV117) but different models: K620 (slot 5, address 88:00.0) and K2200 (slot 4, address 84:00.0). The K2200 is the card Linux selects to output plymouth and boot messages. Swapping the cards results in the HP server BIOS hitting a page fault, even after clearing CMOS and BIOS settings. Swapping the cards back fixes the issue. There is no option in the BIOS to select a primary discrete graphics card. Linux appears to select the graphics card with the lowest PCI bus address. Is there a kernel command line option or some other configuration file to select a different graphics card for the default pre-X11 display?
Select graphics card for console output
Sysctl parameters can be set via the kernel command line starting with kernel version 5.8, thanks to Vlastimil Babka from SUSE:

sysctl.*= [KNL]
    Set a sysctl parameter, right before loading the init process, as if the value was written to the respective /proc/sys/... file. Both '.' and '/' are recognized as separators. Unrecognized parameters and invalid values are reported in the kernel log. Sysctls registered later by a loaded module cannot be set this way. Example: sysctl.vm.swappiness=40
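On the grub side of the question, a sketch (then regenerate grub.cfg, e.g. with update-grub or grub-mkconfig):

GRUB_CMDLINE_LINUX="... sysctl.vm.swappiness=40"
$ sudo update-grub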
Is it possible to set Linux kernel sysctl settings (those usually set in /etc/sysctl.d) using kernel command line (those visible in /proc/cmdline)? (Using grub config file /etc/default/grub variable GRUB_CMDLINE_LINUX="...".)
How to set sysctl using kernel command line parameter?
Yes: echo N | sudo tee /sys/module/printk/parameters/console_suspend
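The same sysfs file can be read back to confirm the change (it prints Y or N):

$ cat /sys/module/printk/parameters/console_suspend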
Is it possible to change this value at runtime, without rebooting? I don't always have this problem; when I suspend right now I'm getting a failure and:

Suspending console(s) (use no_console_suspend to debug)

I would like to debug now, without having to reboot and recreate the problem.
Can no_console_suspend be set in runtime?
It depends a little on the distribution you are using and what components are included by dracut in the initramfs. For example, the cryptdevice= option is interpreted by the encrypt hook. Thus, it's only relevant for initramfs images that include this hook. The disadvantage of rd.luks.allow-discards and rd.luks.allow-discards= is that it simply doesn't work. The dracut.cmdline(7) description of these options is incorrect. I tested it under Fedora 26 where it doesn't work, and there is even a bug report for Fedora 19 where this deviation between documented and actual behavior was discussed and it was closed as wont-fix.

The luks.options= and rd.luks.options= are more generic, as you basically can place any valid crypttab option in there, e.g. discard. Since they are interpreted by systemd-cryptsetup-generator, which doesn't care about cryptdevice=, you can't expect a useful interaction between these options. Note that luks.options= only has an effect for devices that aren't listed in the initramfs image's etc/crypttab file.

Thus, to enable dm-crypt pass-through SSD trim support (a.k.a. discard) for dm-crypted devices opened during boot, you have 2 options:

1. add rd.luks.options=discard to the kernel command line and make sure that the initramfs image doesn't include an etc/crypttab
2. add the discard option to the relevant entries in /etc/crypttab (as sketched below) and make sure that the current version is included in the initramfs image

You can use lsinitrd /path/to/initramfs etc/crypttab for checking the initramfs image, dracut -v -f /path/to/initramfs-image for regenerating the image after changes to /etc, and dmsetup table to see whether the crypted device was actually opened with the discard option (the relevant entries should include the string allow_discards then).
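For option 2, a sketch of what the relevant /etc/crypttab entry could look like; the volume name and UUID are placeholders, and the fourth (options) column is where discard goes:

# /etc/crypttab: name  device  keyfile  options
luksroot  UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  none  luks,discard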
I'm confused between the various ways that LUKS/dmcrypt/cryptsetup discard/TRIM operations can be enabled via the Linux kernel command line.

The dracut manpage:

rd.luks.allow-discards
    Allow using of discards (TRIM) requests on all LUKS partitions.

The systemd-cryptsetup-generator manpage:

luks.options=, rd.luks.options=
    ... If only a list of options, without an UUID, is specified, they apply to any UUIDs not specified elsewhere, and without an entry in /etc/crypttab. ...

The argument rd.luks.options=discard is recommended here. The Arch wiki section on LUKS and SSDs shows a third colon-separated field:

cryptdevice=/dev/sdaX:root:allow-discards

Questions:

What is the difference between discard and allow-discards? Is the former mandatory and the second optional?
Will luks.options= or rd.luks.options= apply given cryptdevice=/dev/sda2 (i.e. not a UUID)? What if cryptdevice= is given a UUID, does that count as "specified elsewhere"?
Will luks.options= or rd.luks.options= overwrite / append / prepend if cryptsetup= already gives options?
Is there any disadvantage to using rd.luks.allow-discards, which seems to be simplest if TRIM is wanted everywhere?
LUKS discard/TRIM: conflicting kernel command line options
Indeed, both parameters affect IPv4 forwarding globally when written: setting net.ipv4.conf.all.forwarding toggles forwarding on all interfaces, just like net.ipv4.ip_forward. The practical difference is that net.ipv4.ip_forward is the special switch whose change resets all interface-level configuration parameters to their defaults (RFC1122 for hosts, RFC1812 for routers), exactly as the documentation you quoted says, while the net.ipv4.conf.* hierarchy leaves the other settings alone and also gives you granular per-interface control. That matters in complex setups where you want forwarding enabled on some interfaces and disabled on others.
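The per-interface variants are where the extra control shows up; a sketch (eth0 is a placeholder interface name):

$ sysctl -w net.ipv4.conf.eth0.forwarding=1                  # forward only for eth0
$ sysctl net.ipv4.ip_forward net.ipv4.conf.all.forwarding    # inspect both global switches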
According to https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt, conf/all/* is special and changes the settings for all interfaces:

forwarding - BOOLEAN
    Enable IP forwarding on this interface. This controls whether packets received _on_ this interface can be forwarded.

ip_forward - BOOLEAN
    0 - disabled (default)
    not 0 - enabled
    Forward Packets between interfaces. This variable is special, its change resets all configuration parameters to their default state (RFC1122 for hosts, RFC1812 for routers)

So, net.ipv4.conf.all.forwarding=0 disables the IPv4 packet forwarding on all interfaces, the same as net.ipv4.ip_forward=0 disables the IPv4 packet forwarding on all interfaces. Can anyone, please, explain what's the difference between the net.ipv4.conf.all.forwarding and net.ipv4.ip_forward kernel params?
Difference between net.ipv4.conf.all.forwarding and net.ipv4.ip_forward
As far as I know, you can use modprobe to adjust parameters only when the feature in question has been compiled as a module - and you're loading the module in the first place. For setting module parameters persistently, you'll have the /etc/modprobe.d directory. (Generally you should leave /usr/lib/modprobe.d for the distribution's default settings - any files in there may get overwritten by package updates.) If the module in question has been built into the main kernel, then you must use the <module_name>.<parameter_name>=<value> syntax, typically as a boot option.

If the parameter in question is available as a sysctl setting, then you can use the sysctl -w command to adjust it too. All the available sysctl parameters are presented under /proc/sys: for example, kernel.domainname is at /proc/sys/kernel/domainname. Not all module parameters are available as sysctls, but some might be.

If a loadable module has already been loaded, and you wish to change its parameters immediately without unloading it, then you can write the new value to /sys/module/<module_name>/parameters/<parameter_name>. If the module cannot accept dynamic reconfiguration for that parameter, the file will be read-only.

At least on my system, kernel.domainname is a sysctl parameter for the main kernel, and trying to change it with modprobe won't work:

# sysctl kernel.domainname
kernel.domainname = (none)
# modprobe kernel domainname="example.com"
modprobe: FATAL: Module kernel not found in directory /lib/modules/<kernel_version>
# sysctl kernel.domainname
kernel.domainname = (none)

In a nutshell: If you are unsure, first look into /proc/sys or the output of sysctl -a: if the parameter you're looking for is not there, it is not a sysctl parameter and is probably a module parameter (or the module that would provide the sysctl is not currently loaded, in which case it's better to set the value as a module parameter anyway - trying to set a sysctl belonging to a module that is not currently loaded will just produce an error). Then, find out which module the parameter belongs to. If the module is built into the kernel, you'll probably have to use a boot option; if it is loadable with modprobe (i.e. the respective <module>.ko file exists somewhere in the /lib/modules/<kernel version>/ directory tree), then you can use modprobe and/or /etc/modprobe.d/.
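A concrete illustration with a parameter belonging to built-in code (a sketch: printk is compiled into every kernel, so its time parameter can be set as the boot option printk.time=1, and it also happens to allow runtime changes through sysfs; it is not a sysctl):

$ cat /sys/module/printk/parameters/time                   # prints Y or N
$ echo Y | sudo tee /sys/module/printk/parameters/time     # enable dmesg timestamps at runtime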
We know that the sysctl command can change kernel parameters with:

# sysctl -w kernel.domainname="example.com"

or by directly editing the file in the /proc/sys directory. And for persistent changes, the parameters must be written to /etc/sysctl.d/<moduleName>.conf files as:

# echo kernel.domainname="example.com" > /etc/sysctl.d/domainname.conf

However, we can also change the kernel parameters using the modprobe command:

# modprobe kernel domainname="example.com"

And then there's the modprobe.conf file in the /etc/modprobe.d directories, which is present in multiple locations: /etc/modprobe.d and /usr/lib/modprobe.d. It contains multiple .conf files, and the options can be provided in the appropriate conf file for the module as:

options kernel domainname="example.com"

So, what's the difference between each of these methods? Which method should be used under what specific circumstances?
Difference between modprobe and sysctl -w in terms of setting system parameters?
You've probably forgotten to include the required cryptdevice mapped name in the kernel command line parameter. I had:

cryptdevice=/dev/sdaX

However, the second colon-separated field is mandatory, e.g.:

cryptdevice=/dev/sdaX:root

If you're using an SSD, and have understood the implications, for increased performance you may want to use:

cryptdevice=/dev/sdaX:root:allow-discards
At boot I see:

:: running hook [encrypt]
A password is required to access the volume:
Command requires device and mapped name as arguments
Command requires device and mapped name as arguments
Command requires device and mapped name as arguments

The final message repeats every second. There is no opportunity for me to enter a password. I am running Manjaro, based upon Arch. What am I doing wrong?
LUKS password not being requested by dmcrypt / encrypt hook at boot
A module reference count of -1, visible both in /sys/module/<module>/refcnt and in lsmod’s output, means that the module is currently unloading. If a module’s reference count stays at -1, that indicates a problem — dmesg should tell you more.
In this post there is some explanation about why does lsmod show -2 in 'used by' column. The idea is that the kernel config option CONFIG_MODULE_UNLOAD was not set. But what if lsmod shows -1 only for one specific module while CONFIG_MODULE_UNLOAD is set in my current kernel? How to debug this muddle?
lsmod 'used by' shows -1 while CONFIG_MODULE_UNLOAD=y
One, arguably silly, idea that comes to mind is to see if you can pull the kernel's symbol table from the image or from /proc/kallsyms or somewhere, and reverse engineer at least the included drivers based on that. Though with something like 35000 symbols shown by kallsyms on a stock distribution kernel, that would require some scripting.
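As a minimal sketch of that idea: even without full reverse engineering, grepping the symbol table for a driver's symbol prefix tells you whether that driver's code is present (uvc_ is just an example prefix; on many systems reading kallsyms addresses requires root):

$ sudo grep ' uvc_' /proc/kallsyms | head
# symbols tagged with a [modulename] suffix come from a loaded module;
# untagged hits mean the code is built into the kernel image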
I'm currently trying to rebuild the kernel for a proprietary device. In order to do this I will need to produce a kernel config for the device. While I could likely do this through trial and error, it would be better to see if I can extract the config from the running host. That being said, the running kernel was not compiled with CONFIG_IKCONFIG (and thus not CONFIG_IKCONFIG_PROC either). This means that there is no /proc/config.gz to extract. In addition, they didn't bother to package the config in /boot either. Thus, the two common places where a kernel config is generally stored are out of luck. Most everything was compiled statically into this kernel:

# cat /proc/modules
linux_user_bde 12327 0 - Live 0xf8536000 (PO)
linux_kernel_bde 29225 1 linux_user_bde, Live 0xf8524000 (PO)
pciDrv 1448 0 - Live 0xf8510000 (O)
iTCO_wdt 4456 0 - Live 0xf83fb000
iTCO_vendor_support 2003 1 iTCO_wdt, Live 0xf83f7000
i2c_dev 5443 0 - Live 0xf83f2000
i2c_i801 9421 0 - Live 0xf83eb000
i2c_core 20859 3 i2cscan,i2c_dev,i2c_i801, Live 0xf83e0000
igb 148294 0 - Live 0xf83ae000 (O)
dca 4665 0 - Live 0xf804c000
# ls -l /proc/conf*
ls: /proc/conf*: No such file or directory
# find /boot/ -name "conf*"
# modprobe configs
modprobe: module 'configs' not found
#
What are methods for recovering a Linux Kernel config?
I'm sure someone is still doing this, but back in the days before stuff like iLO/DRAC/etc. became cheap and ubiquitous, the best way to get "out of band" access to the console in case of emergencies or an oops was over the serial port. You would mount a terminal server in the rack, then run cables to the serial port of your servers. Some BIOSes supported console redirection to the serial port (for example VA Linux and SuperMicro servers in the 1999+ timeframe). The 8250/16550 UARTs were some of the most popular serial port chips at the time, meaning that they would be the best supported under Linux, and all of them used the 8250 kernel driver (there were many more models in that series that all used the same driver). I suspect that a lot of SoC designs intended to be used with Linux built 8250/16550-compatible UARTs into them because it was the easy button -- well documented and with a well tested driver. Although hopefully they built the later multibyte-buffer versions (of course even "slow" processors by today's standards can service a serial interrupt far more often than a 115k serial port requires). IIRC the Mac had a serial port used for LocalTalk/AppleTalk (can't remember which was the protocol and which was the hardware) that did about 230k. Still, that was back when CPUs did 60 MHz.

This is probably the best answer for the difference between MMIO and port I/O: https://en.wikipedia.org/wiki/Memory-mapped_I/O -- I don't understand that level well enough to boil it down. The above link will probably answer what <addr> is for these purposes, but basically it's a memory address.
I first found this by investigating parameters for earlycon but found that the options for console look almost identical. Both are present below and were taken from this source: From documentation for console we have: console= [KNL] Output console device and options. tty<n> Use the virtual console device <n>. ttyS<n>[,options] ttyUSB0[,options] Use the specified serial port. The options are of the form "bbbbpnf", where "bbbb" is the baud rate, "p" is parity ("n", "o", or "e"), "n" is number of bits, and "f" is flow control ("r" for RTS or omit it). Default is "9600n8". See Documentation/serial-console.txt for more information. See Documentation/networking/netconsole.txt for an alternative. uart[8250],io,<addr>[,options] uart[8250],mmio,<addr>[,options] uart[8250],mmio16,<addr>[,options] uart[8250],mmio32,<addr>[,options] uart[8250],0x<addr>[,options] Start an early, polled-mode console on the 8250/16550 UART at the specified I/O port or MMIO address, switching to the matching ttyS device later. MMIO inter-register address stride is either 8-bit (mmio), 16-bit (mmio16), or 32-bit (mmio32). If none of [io|mmio|mmio16|mmio32], <addr> is assumed to be equivalent to 'mmio'. 'options' are specified in the same format described for ttyS above; if unspecified, the h/w is not re-initialized. hvc<n> Use the hypervisor console device <n>. This is for both Xen and PowerPC hypervisors. If the device connected to the port is not a TTY but a braille device, prepend "brl," before the device type, for instance console=brl,ttyS0 For now, only VisioBraille is supported.From documentation for earlycon we have: earlycon= [KNL] Output early console device and options. When used with no options, the early console is determined by the stdout-path property in device tree's chosen node. cdns,<addr> Start an early, polled-mode console on a cadence serial port at the specified address. The cadence serial port must already be setup and configured. Options are not yet supported. uart[8250],io,<addr>[,options] uart[8250],mmio,<addr>[,options] uart[8250],mmio32,<addr>[,options] uart[8250],mmio32be,<addr>[,options] uart[8250],0x<addr>[,options] Start an early, polled-mode console on the 8250/16550 UART at the specified I/O port or MMIO address. MMIO inter-register address stride is either 8-bit (mmio) or 32-bit (mmio32 or mmio32be). If none of [io|mmio|mmio32|mmio32be], <addr> is assumed to be equivalent to 'mmio'. 'options' are specified in the same format described for "console=ttyS<n>"; if unspecified, the h/w is not initialized. pl011,<addr> pl011,mmio32,<addr> Start an early, polled-mode console on a pl011 serial port at the specified address. The pl011 serial port must already be setup and configured. Options are not yet supported. If 'mmio32' is specified, then only the driver will use only 32-bit accessors to read/write the device registers. meson,<addr> Start an early, polled-mode console on a meson serial port at the specified address. The serial port must already be setup and configured. Options are not yet supported. msm_serial,<addr> Start an early, polled-mode console on an msm serial port at the specified address. The serial port must already be setup and configured. Options are not yet supported. msm_serial_dm,<addr> Start an early, polled-mode console on an msm serial dm port at the specified address. The serial port must already be setup and configured. Options are not yet supported. smh Use ARM semihosting calls for early console. 
s3c2410,<addr> s3c2412,<addr> s3c2440,<addr> s3c6400,<addr> s5pv210,<addr> exynos4210,<addr> Use early console provided by serial driver available on Samsung SoCs, requires selecting proper type and a correct base address of the selected UART port. The serial port must already be setup and configured. Options are not yet supported. lpuart,<addr> lpuart32,<addr> Use early console provided by Freescale LP UART driver found on Freescale Vybrid and QorIQ LS1021A processors. A valid base address must be provided, and the serial port must already be setup and configured. armada3700_uart,<addr> Start an early, polled-mode console on the Armada 3700 serial port at the specified address. The serial port must already be setup and configured. Options are not yet supported.An example of the usage is: earlycon=uart8250,0x21c0500 My questions are: Why is there a reference to the 8250/16550 physical hardware? Has this old implementation molded into an interface specification for modern designs? That is, are we still using the drivers for UART that were compatible when these comms devices were external to the SoC? If MMIO is Memory Mapped IO, what is "normal" IO referring to in this context? What is the <addr> parameter? Is this the beginning of the UART configuration registers for the specific SoC you are running this kernel on? Do most UART configuration registers conform to a specific register layout such that a generic UART driver may appropriately configure the hardware?
Kernel parameters "console" and "earlycon" refer to old hardware?
If CONFIG_KALLSYMS is enabled, built-in drivers can be disabled by disabling their init function. For uvcvideo (which is likely to be the driver used for your webcam), add initcall_blacklist=uvc_video_init to your kernel's command line. If CONFIG_KALLSYMS isn't enabled, you won't be able to disable only your webcam using kernel command line parameters, but you can still control your webcam at run-time: find its entry in /sys/bus/usb/devices, and write 0 to the corresponding authorized file, e.g.
echo 0 | sudo tee /sys/bus/usb/devices/1-8/authorized
Write 1 to enable the camera again. You can use USBGuard to provide control over all your USB devices, including your webcam.
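If you don't know which path under /sys/bus/usb/devices belongs to the camera, you can search the product strings. A minimal sketch, assuming the product string contains "camera" (the pattern and the 1-8 path are examples; your device will differ):
$ grep -il camera /sys/bus/usb/devices/*/product
/sys/bus/usb/devices/1-8/product
$ echo 0 | sudo tee /sys/bus/usb/devices/1-8/authorized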
I have an internal webcam on my Dell laptop. I don't see it listed with lspci, but it works. I am using a self-compiled kernel, and here are the options I have enabled: # zcat /proc/config.gz | grep -v '^#' | egrep '(MEDIA|VIDEO)' CONFIG_ACPI_VIDEO=y CONFIG_MEDIA_SUPPORT=y CONFIG_MEDIA_SUPPORT_FILTER=y CONFIG_MEDIA_CAMERA_SUPPORT=y CONFIG_VIDEO_DEV=y CONFIG_MEDIA_CONTROLLER=y CONFIG_VIDEO_V4L2=y CONFIG_VIDEO_V4L2_I2C=y CONFIG_MEDIA_USB_SUPPORT=y CONFIG_USB_VIDEO_CLASS=y CONFIG_VIDEOBUF2_CORE=y CONFIG_VIDEOBUF2_V4L2=y CONFIG_VIDEOBUF2_MEMOPS=y CONFIG_VIDEOBUF2_VMALLOC=y CONFIG_SND_USB_AUDIO_USE_MEDIA_CONTROLLER=yAll options in my kernel are compiled statically, and I am not using loadable modules. How can I disable the webcam at boot time, by passing/appending something to the kernel boot options? I would like to decide at boot time whether I want to boot the kernel with webcam support, or without.
Disable webcam at boot time, by appending a boot parameter
There's no automatic database relating sysctl variables to modules. You can search the module binary and hope that the variable name isn't found in other strings (this one isn't). Search for the last part, i.e. bridge-nf-call-iptables —the full string isn't present in the binary, it's constructed dynamically. grep -rl bridge-nf-call-iptables /lib/modules/`uname -r`Alternatively, you can check the documentation —but it doesn't always tell you, and in this case it doesn't say. So you're left with the source code. First look for the string (again, only the last part); in recent kernels it's in net/bridge/br_netfilter_hooks.c. Now check the makefile in the same directory to see how this source file is built. The relevant line is br_netfilter-y := br_netfilter_hooks.owhich means that if the br_netfilter module is built then it contains the code from br_netfilter_hooks.c, thus the br_netfilter module is the one you need.
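As a sanity check, loading the module should make the variable appear; a quick sketch (the value shown is just the typical default and may differ on your system):
$ sysctl net.bridge.bridge-nf-call-iptables
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
$ sudo modprobe br_netfilter
$ sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1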
This is on Linux. I see that on some machines the net.bridge.bridge-nf-call-iptables variable doesn't exist until I modprobe the br_netfilter module. I also see that there are some machines where the bridge module is loaded, and that itself brings in this variable. Is there a way to know which module I should load in order to get a particular variable?
Kernel module for net.bridge.bridge-nf-call-iptables
taskset will call sched_setaffinity, which will fail if

The affinity bit mask mask contains no processors that are currently physically on the system and permitted to the thread according to any restrictions that may be imposed by cpuset cgroups or the "cpuset" mechanism described in cpuset(7).

As for why the behaviour is different when resorting to the isolcpus= boot parameter, please note in your kernel documentation (Documentation/admin-guide/kernel-parameters.txt) that this can be overridden at runtime by the CPU affinity syscalls:

You can move a process onto or off an "isolated" CPU via the CPU affinity syscalls or cpuset.

Contrarily, man cpuset will tell you that cpuset placement is enforced in case of a conflicting setting from sched_setaffinity:

Cpusets are integrated with the sched_setaffinity(2) scheduling affinity mechanism and the mbind(2) and set_mempolicy(2) memory-placement mechanisms in the kernel. Neither of these mechanisms let a process make use of a CPU or memory node that is not allowed by that process's cpuset. If changes to a process's cpuset placement conflict with these other mechanisms, then cpuset placement is enforced even if it means overriding these other mechanisms. The kernel accomplishes this overriding by silently restricting the CPUs and memory nodes requested by these other mechanisms to those allowed by the invoking process's cpuset. This can result in these other calls returning an error, if for example, such a call ends up requesting an empty set of CPUs or memory nodes, after that request is restricted to the invoking process's cpuset.

BTW: please also note (in the kernel documentation) that the isolcpus= boot parameter is deprecated.
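In other words, move the task with the cpuset interface itself rather than with taskset. A minimal sketch using the paths from the question (cgroup v1 cpuset mounted at /cpusets):
$ ./loop &
$ echo $! | sudo tee /cpusets/isolated/tasks
The scheduler will then run the task on the isolated CPU 0, and the placement stays consistent with the cpuset configuration.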
First, let me give a background of what I am trying to achieve. I know how to isolate a particular CPU using boot params (isolcpus and nohz_full; the housekeeping subsystem setup). But as per my requirement, I need to isolate the CPU after the system has booted up. So, as per many articles, I tried to isolate a particular CPU using the cpuset subsystem as follows. I am using hardware with 16 CPUs (0-15), so I decided to isolate CPU 0.
$ cd /cpusets
$ mkdir housekeeping
$ mkdir isolated
$ echo 1-15 > housekeeping/cpus
$ echo 0 > mems
$ echo 0 > isolated/cpus
$ echo 0 > isolated/mems
$ echo 0 > cpuset.sched_load_balance
$ echo 0 > isolated/sched_load_balance
$ while read P ; do echo $P > housekeeping/tasks ; done < tasks
This isolates processor 0 from all the other processors. But then I tried to assign a process to processor 0 using taskset as follows:
/******loop.c**********/
int main(){ int i; for(i=0;;i++); return 0; }
$ gcc -o loop loop.c
$ taskset -c 0 ./loop
taskset: failed to set pid 2755250's affinity: Invalid argument
Apart from echoing pid 2755250 to isolated/tasks, is it possible to set the affinity of a new process to the isolated CPU 0? Where am I making a mistake?
Set affinity of a process using TASKSET or sched_setaffinity() to a processor core isolated using CPUSET
There is a kernel boot parameter, sysrq_always_enabled, according to the doc:

sysrq_always_enabled [KNL] Ignore sysrq setting - this boot parameter will neutralize any effect of /proc/sys/kernel/sysrq. Useful for debugging.

I have tested SysRq to work (e.g. help, sync) even when kernel.sysrq=0 (so it's just as the doc above says), if I add the kernel boot argument sysrq_always_enabled (note: it doesn't have to be sysrq_always_enabled=1). (To double check, I've also tested SysRq to NOT work when kernel.sysrq=0 and sysrq_always_enabled is NOT present in /proc/cmdline.) Source code confirms it too. Note: /proc/sys/kernel/sysrq is another way to read/write kernel.sysrq. For more info on sysrq: https://www.kernel.org/doc/html/v4.15/admin-guide/sysrq.html (though sysrq_always_enabled is not mentioned there, but it is here) EDIT: When sysrq_always_enabled is in effect, there is a dmesg line: [ 0.000000] sysrq: sysrq always enabled.
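To double-check on a running system (note that, per the kernel's sysrq documentation, the kernel.sysrq mask only restricts invocation via the keyboard; writing to /proc/sysrq-trigger as root is always allowed):
$ grep -o sysrq_always_enabled /proc/cmdline
sysrq_always_enabled
$ dmesg | grep -i 'sysrq always enabled'
[    0.000000] sysrq: sysrq always enabled.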
Some Linux distributions have kernel.sysrq=16 which means only SysRq + s (sync) is allowed. As an example: Fedora (25 and 28) has it set as such in /usr/lib/sysctl.d/50-default.conf I had to create a file as /etc/sysctl.d/95-sysrq.conf where I manually set kernel.sysrq=1 so it's available as soon as possible (but possibly not soon enough depending on situation): $ grep -nHi sysrq /usr/lib/sysctl.d/* /etc/sysctl.d/* /usr/lib/sysctl.d/50-default.conf:16:# Use kernel.sysrq = 1 to allow all keys. /usr/lib/sysctl.d/50-default.conf:17:# See http://fedoraproject.org/wiki/QA/Sysrq for a list of values and keys. /usr/lib/sysctl.d/50-default.conf:18:kernel.sysrq = 16 /etc/sysctl.d/95-sysrq.conf:1:kernel.sysrq=1Is there a way to enable Sysrq from early boot, possibly also ignoring any setting for kernel.sysrq?, for example adding a kernel boot parameter (e.g., cat /proc/cmdline for current ones) such as from the Grub boot menu (or in xen.cfg's kernel= line).
How to ensure SysRq is always enabled regardless of the kernel.sysrq setting?
It may be stuck loading the kernel or initrd; try adding an insmod progress line, plus something like echo 'Loading linux...' and echo 'Loading initrd...' before the linux and initrd lines. Also consider adding the tsc=unstable kernel parameter, as it may fix a delay before booting; see the following question: kernel boot logging causes delay. The most verbose option should be ignore_loglevel, see the kernel parameters list:

ignore_loglevel [KNL] Ignore loglevel setting - this will print /all/ kernel messages to the console. Useful for debugging. We also add it as printk module parameter, so users could change it dynamically, usually by /sys/module/printk/parameters/ignore_loglevel.

It should look like this:
[some lines here, setting root, etc.]
insmod progress
echo 'Loading linux...'
linux /path/to/linux root=[your root] ro ignore_loglevel tsc=unstable
echo 'Loading initrd...'
initrd /path/to/initrd
Is the drive LED blinking? Also note that the size displayed while loading (with insmod progress) should be a few MiB for the kernel and from a few MiB to around 100 MiB for the initrd. If you installed Linux on a USB device and are booting it on an old laptop (or computer) that supports booting from USB, it may be loading at around 60 KiB/s or even slower, so it appears to be stuck (i.e. with a big initrd). That is the case for the IBM ThinkPad type 2647.
I have an error when booting a machine: it is stuck with the message Booting a command list. I can see this message when I add loglevel=7, and only if I delete quiet from the kernel parameters via the GRUB boot loader. Is there a way to make the boot even more verbose than loglevel=7 using kernel parameters?
Can I make the boot even more verbose than loglevel=7 using kernel parameters?
So one of the big things about learning to Unix is reading the bloody man page: I'm not just being a get-off-my-lawn grumpy old man, there REALLY IS valuable information in there. In this case:

DESCRIPTION sysctl is used to modify kernel parameters at runtime. The parameters available are those listed under /proc/sys/. Procfs is required for sysctl support in Linux. You can use sysctl to both read and write sysctl data.

So we can:
$ sudo sysctl -a | grep kernel.perf_event_max_sample_rate
kernel.perf_event_max_sample_rate = 50000
sysctl: reading key "net.ipv6.conf.all.stable_secret"
sysctl: reading key "net.ipv6.conf.default.stable_secret"
sysctl: reading key "net.ipv6.conf.enp3s0.stable_secret"
sysctl: reading key "net.ipv6.conf.lo.stable_secret"
sysctl: reading key "net.ipv6.conf.wlp1s0.stable_secret"
By reading the man page we learn that -a means "display all values currently available", but we can also see:

SYNOPSIS sysctl [options] [variable[=value]] [...] sysctl -p [file or regexp] [...]

which means we can shorten the above command to:
$ sudo sysctl kernel.perf_event_max_sample_rate
kernel.perf_event_max_sample_rate = 50000
Or we can:
$ more /proc/sys/kernel/perf_event_max_sample_rate
50000
So, TL;DR: Yes, you can write a script to log this variable every few minutes, but if it's going to show up in the logs when it changes anyway, why would you? It would probably be more efficient to read the value right out of /proc/sys/kernel/perf_event_max_sample_rate than to use sysctl, and it would be more efficient to ask sysctl for the specific value than to use grep.
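That said, if you still want the periodic log, a minimal sketch (the log path and 5-minute interval are arbitrary choices):
#!/bin/sh
# Append a timestamped reading every 5 minutes
while :; do
    printf '%s %s\n' "$(date -Is)" \
        "$(cat /proc/sys/kernel/perf_event_max_sample_rate)"
    sleep 300
done >> /var/log/perf_event_max_sample_rate.log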
I saw in my syslog kernel.perf_event_max_sample_rate get changed. I was wondering if I could write a quick script to log this variable every few minutes. Currently it is: sysctl -a | grep kernel.perf_event_max_sample_rateIn the man page sysctl sayssysctl - configure kernel parameters at runtimeDoes that mean that my script would get the parameter as it was set when the kernel starts? Would it pick up changes?
View current kernel parameters?
While I didn't really find an answer as to why, since I can't grok that GRUB source code, I'm quite certain now that GRUB 2's badram command is just broken for 64-bit address space. So this information is for anyone else who may be going down this rabbit hole. (tl;dr: badram is unusable! Use Linux' memmap= kernel parameter instead.) After applying a grub commandline of badram 0xac4d96c0,0xfffffff8, what I am seeing in Linux' e820 memory map is fragmentation like this (sorry, SE doesn't have colored diff highlighting):
--- a/e820
+++ b/e820
@@ -1,5 +1,7 @@
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
-[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bd6f5fff] usable
+[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000ac4d8fff] usable
+[ 0.000000] BIOS-e820: [mem 0x00000000ac4d9000-0x00000000ac4d9fff] unusable
+[ 0.000000] BIOS-e820: [mem 0x00000000ac4da000-0x00000000bd6f5fff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000bd6f6000-0x00000000bd749fff] ACPI NVS
[ 0.000000] BIOS-e820: [mem 0x00000000bd74a000-0x00000000bd751fff] ACPI data
[ 0.000000] BIOS-e820: [mem 0x00000000bd752000-0x00000000bd752fff] ACPI NVS
@@ -28,4 +30,18 @@
[ 0.000000] BIOS-e820: [mem 0x00000000fed61000-0x00000000fed70fff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fed80000-0x00000000fed8ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fef00000-0x00000000ffffffff] reserved
-[ 0.000000] BIOS-e820: [mem 0x0000000100001000-0x000000083effffff] usable
+[ 0.000000] BIOS-e820: [mem 0x0000000100001000-0x00000001ac4d8fff] usable
+[ 0.000000] BIOS-e820: [mem 0x00000001ac4d9000-0x00000001ac4d9fff] unusable
+[ 0.000000] BIOS-e820: [mem 0x00000001ac4da000-0x00000002ac4d8fff] usable
+[ 0.000000] BIOS-e820: [mem 0x00000002ac4d9000-0x00000002ac4d9fff] unusable
+[ 0.000000] BIOS-e820: [mem 0x00000002ac4da000-0x00000003ac4d8fff] usable
+[ 0.000000] BIOS-e820: [mem 0x00000003ac4d9000-0x00000003ac4d9fff] unusable
+[ 0.000000] BIOS-e820: [mem 0x00000003ac4da000-0x00000004ac4d8fff] usable
+[ 0.000000] BIOS-e820: [mem 0x00000004ac4d9000-0x00000004ac4d9fff] unusable
+[ 0.000000] BIOS-e820: [mem 0x00000004ac4da000-0x00000005ac4d8fff] usable
+[ 0.000000] BIOS-e820: [mem 0x00000005ac4d9000-0x00000005ac4d9fff] unusable
+[ 0.000000] BIOS-e820: [mem 0x00000005ac4da000-0x00000006ac4d8fff] usable
+[ 0.000000] BIOS-e820: [mem 0x00000006ac4d9000-0x00000006ac4d9fff] unusable
+[ 0.000000] BIOS-e820: [mem 0x00000006ac4da000-0x00000007ac4d8fff] usable
+[ 0.000000] BIOS-e820: [mem 0x00000007ac4d9000-0x00000007ac4d9fff] unusable
+[ 0.000000] BIOS-e820: [mem 0x00000007ac4da000-0x000000083effffff] usable
The big chunk of usable memory got fragmented by lots of small regions, which I guess means the passed address mask of 0xfffffff8 that we gave GRUB is, quite logically, 0x00000000fffffff8 in 64-bit space.
The kernel then goes on to log about mapping all these "unusable" holes out of system RAM as RAM buffers, which should make the pattern even more obvious:
[ 0.615775] e820: reserve RAM buffer [mem 0xac4d9000-0xafffffff]
[ 0.615779] e820: reserve RAM buffer [mem 0x1ac4d9000-0x1afffffff]
[ 0.615780] e820: reserve RAM buffer [mem 0x2ac4d9000-0x2afffffff]
[ 0.615781] e820: reserve RAM buffer [mem 0x3ac4d9000-0x3afffffff]
[ 0.615782] e820: reserve RAM buffer [mem 0x4ac4d9000-0x4afffffff]
[ 0.615783] e820: reserve RAM buffer [mem 0x5ac4d9000-0x5afffffff]
[ 0.615784] e820: reserve RAM buffer [mem 0x6ac4d9000-0x6afffffff]
[ 0.615785] e820: reserve RAM buffer [mem 0x7ac4d9000-0x7afffffff]
So we actually reserved the space from 0xac4d9000 to 0xac4d9fff... That's our 0xac4d96c0 badram address, aligned on the lower boundary and then punched out as a nice 4KB kernel-pagesize hole around our tiny bit error. (Imagine, if you want, someone getting rid of a spider on the wall by putting a hole around it with a cannonball. Clearly takes care of the problem, indeed.) ...and then we went on to reserve space of the same size at 0x1ac4d9000, 0x2ac4d9000, 0x3ac4d9000... up until we ran out of system RAM. (Now that looks like a cannonball launcher with some spray to it!) But all these are small holes, and the wall is big, so everything could be fine, indeed. However, what if our badram pattern is actually a lot larger than in this case? Or worse, Memtester gives us a whole bunch of places to avoid? Then this issue just turns our memory map into a sieve! So don't use GRUB badram on 64-bit systems, unless you know what you're doing. Instead, a perfectly good replacement option on Linux is the memmap= command-line parameter. Incidentally, that also allows you to punch smaller holes than the 4KB blocks GRUB would generate, and it also works with other bootloaders like EFIStub. (Another option could be to somehow pass the kernel your own, modified, e820 memory map -- but if you know how to do that you'll probably not be reading this.)
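For reference, a sketch of the memmap= alternative for the example address above. Note the backslash escaping of '$' that the kernel documentation warns about for GRUB 2; the exact number of backslashes depends on the quoting layers between /etc/default/grub and the kernel, so verify against /proc/cmdline after rebooting:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="... memmap=4K\\\$0xac4d9000"
$ sudo update-grub
# after reboot:
$ cat /proc/cmdline
$ dmesg | grep -i e820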
When I add the "badram" pattern that 64-bit Memtest86+ v6.10/v6.20 gave me, GRUB 2 hangs completely on boot.
Q: Why is the badram pattern address different from the "Error Address" displayed (0x0ac... vs 0x62c...)? What is the reason for this apparent offset? Why does GRUB hang on passing a 64-bit badram pattern?
This is my GRUB...
# grub-mkimage --version
grub-mkimage (GRUB) 2.06-3~deb11u5
...sleeping on the job. Beyond the "Welcome to GRUB!" message, nothing. No reboot, no reaction to key inputs, no rescue shell. System completely "bricked" - I had to build a rescue USB UEFI boot stick to recover from this. (Btw, no secure-boot hardware, no signed grub install, so no excuse.) Anyway. I don't have too much knowledge about system memory and really can't tell much from Memtest's hex numbers. But I don't think I can just trim the leading zeros and pass these like 32-bit numbers to GRUB, or can I? ...Some person on reddit seems to have done just that, but, like me, couldn't afterwards verify if those numbers actually worked as expected and masked out the correct memory regions. Why might GRUB crap itself on this? Is this a bad mem region to mask? Is the region too small, should it be a certain size (like 4K page size) or alignment? Is GRUB badram just broken, perhaps? Or is the hardware? (I don't think so, but you never know with these ACPI tables, right?) In any case, I dug up quite a few instances of other people reporting the same problem with GRUB + 64-bit addresses (clearly my GRUB is not the only lazy worker out there):

Upon issuing this command (either via grub.cfg or interactively on the command line) my system hangs and becomes unresponsive. badram 0x000000008c4e0800,0xffffffffffffcfe0

(They got no response from GRUB devs)

GRUB_BADRAM="0x00000000b3a9feec,0xfffffffffffffffc" And after that change, I don't even get to Grub boot screen. When it's supposed to show up, computer just hangs and shows the black screen.

(They didn't manage to fix it, either)

I did all that, but the Computer that is perfectly fine and has no errors refused to boot after that GRUB_BADRAM= line addition. it never boots and gives no menu at al.

(The GRUB badram argument failed on two different computers for them...) ... I can't tell if there might be any relation between these patterns that makes them bad, or if GRUB badram just plain doesn't work with 64-bit addresses, since I couldn't find any positive "works for me" reports. (Those all boiled down to people using the Linux memmap= format or the Linux memtest= kernel parameter instead.) Finally, I found one more person who seems to have had success with badram... using 32-bit address notation (on a 64-bit machine)? So I'm going to try that next.
GRUB hangs itself with 64bit Memtest86+ BadRAM pattern?
From the kernel documentation:The kernel parses parameters from the kernel command line up to --; if it doesn't recognize a parameter and it doesn't contain a ., the parameter gets passed to init: parameters with = go into init's environment, others are passed as command line arguments to init. Everything after -- is passed as an argument to init.This also applies to /init on an initramfs. In the source code, both the initramfs's /init and the final root's /sbin/init (or other locations) are invoked via run_init_process which uses the same arguments (apart from argument 0 which is the path to the executable). I can't find it stated in the documentation but kernel interfaces are stable so this won't change. Note that this does not apply to /linuxrc on an initrd. This one is invoked with no arguments, but with the same environment as /init and /sbin/init. It can mount the proc filesystem and read /proc/cmdline to see the kernel command line arguments.
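You can observe this from the booted system. A quick sketch with the example parameters from the question (booted with blabla eat=cake; output will vary with your init):
$ sudo cat /proc/1/environ | tr '\0' '\n' | grep eat
eat=cake
$ tr '\0' '\n' < /proc/1/cmdline
/sbin/init
blabla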
Suppose I passed the kernel a parameter that it doesn't understand, for example blabla or eat=cake. What would the kernel do with these unknown parameters? The traditional case would be passing any unknown parameter to init. In the case where the Linux kernel starts with an early user space (initramfs), would it pass it to /init in the initramfs?
What does the Linux kernel do with unknown kernel parameters?
I hope you have found the answer already, but here is some information that could help. This kernel-mailing-list discussion and this article mention the issue, and the explanation is that by setting the MRRS you ensure that devices do not send out read requests whose completion packet size (the answer) is bigger than the MPS of the device sending the read request. If you ensure that, every node is able to take the MPS of the node above as its own MPS (or the highest supported by the device, if that is lower than the MPS of the node above), so one node with a very low MPS cannot slow down the whole bus. This schematic from the discussion helped me a lot to understand the problem:
normal:
        root (MPS=128)
          |
   ------------------
  /                  \
bridge0 (MPS=128)   bridge1 (MPS=128)
  |                   |
EP0 (MPS=128)       EP1 (MPS=128)
perf:
        root (MPS=256)
          |
   ------------------
  /                  \
bridge0 (MPS=256)   bridge1 (MPS=128)
  |                   |
EP0 (MPS=256)       EP1 (MPS=128)
Where every node is able to have an MPS higher than 128 bytes, except EP1.
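On a live system you can inspect what was actually negotiated with pciutils; a sketch (the 01:00.0 address is an example, output abridged). Comparing DevCap (what the device could do) against DevCtl (what was configured) across the tree shows whether pcie_bus_perf changed anything:
$ sudo lspci -vv -s 01:00.0 | grep -E 'MaxPayload|MaxReadReq'
        DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
                MaxPayload 128 bytes, MaxReadReq 512 bytes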
Quoting the Linux kernel documentation for boot parameters:

pcie_bus_perf : Set device MPS to the largest allowable MPS based on its parent bus. Also set MRRS (Max Read Request Size) to the largest supported value (no larger than the MPS that the device or bus can support) for best performance.

I fail to understand why the MRRS should not be larger than the MPS "for best performance". I mean, if a device can do MPS=X and MRRS=4X, then read requests could be fewer in number, hence the bus less busy, compared to an MRRS=X situation, even if the satisfaction of the request needs to be split in 4. Would the split induce some significant overhead somewhere? BTW, I know the concept of "fair sharing" and understand the impact of a large MRRS on that sharing, but I never understood fair sharing to be synonymous with best performance.
pcie_bus_perf : Understanding the capping of MRRS
Some/many (but not all) modern kernels add an offset to jiffies - it's a very large offset, basically
4294967295 - (300 * HZ)
The 300 * HZ is a 5-minute offset, so that the kernel always exercises jiffy rollover shortly after boot. So, for HZ=300 that would be 4294877295. Subtracting that from the jiffies value, then dividing by HZ, should produce the right result:
4356505571 - 4294877295 = 61628276
61628276 / 300 = 205427.587
Which STILL doesn't match the values in the question. However, in the comments, the OP says that after 90 seconds, jiffies is 4294904295:
4294904295 - 4294877295 = 27000
27000 / 300 = 90.000
To put that into a simple formula:
uptime = (jiffies - (4294967295 - (300 * HZ))) / HZ
or, equivalently:
uptime = (jiffies - 4294967295) / HZ + 300
Note: all my Linux systems use the offset - except my OpenWRT routers; despite having kernel version 5.4 in the latest release, the jiffies-to-uptime relationship there is simply as the OP expected:
uptime = jiffies / HZ
Most (all?) of this information was gleaned from https://stackoverflow.com/a/63176716/10549313 - credit goes to @firo for that; and https://stackoverflow.com/a/33612184/10549313 - credit goes to @ZanLynx. However, adding it here probably makes sense too.
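A quick sanity check of the formula against /proc/uptime - a sketch assuming the offset variant and HZ=300 as in the question (bash integer arithmetic, so seconds only):
$ HZ=300
$ j=$(awk '/^jiffies/ {print $2; exit}' /proc/timer_list)
$ echo $(( (j - (4294967295 - 300*HZ)) / HZ ))
$ cut -d' ' -f1 /proc/uptime
If the two values disagree wildly, your kernel is probably one of the no-offset ones and plain jiffies / HZ applies.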
System uptime is stored in /proc/uptime. As you know, the Linux kernel has a jiffies variable which is incremented on each timer interrupt, at the rate specified by the HZ parameter. I got the value of HZ by the following command:
$ zcat /proc/config.gz | grep CONFIG_HZ=
CONFIG_HZ=300
On my machine, it's equal to 300. So I divided the jiffies given by /proc/timer_list by this number.
# cat /proc/timer_list | grep -E "^jiffies" | head -n1 && cat /proc/uptime
jiffies: 4356505571
516409.13 1432145.01
I assumed I would get the same number, but it's remarkably different. I mean, 4356505571/300=14521685.23 should be really close to 516409.13, but it's not! Is there any idea behind jiffies that I am not aware of?
Why jiffies/HZ does not match uptime?
The correct quoting / escaping format to use is: GRUB_CMDLINE_LINUX="... acpi_osi=\"Windows 2015\" ... "Then /proc/cmdline will contain: "acpi_osi=Windows 2015"
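After editing /etc/default/grub, the usual steps apply: regenerate the configuration and verify on the next boot, e.g.:
$ sudo update-grub    # Debian/Ubuntu; or: grub2-mkconfig -o /boot/grub2/grub.cfg
$ cat /proc/cmdline   # after reboot, should include: "acpi_osi=Windows 2015"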
How do I escape the string: acpi_osi="Windows 2015"To keep the space and quotes in GRUB's GRUB_CMDLINE_LINUX? [Alternate search term: acpi_os_name=]
GRUB: escape acpi_osi="Windows 2015" in GRUB_CMDLINE_LINUX
Are you using virtual machines or a hypervisor? If so, you should update your hypervisor host to the latest version so it can support the kernel version. CPUFreq stands for CPU frequency scaling, which enables the operating system to scale the CPU frequency up or down in order to save power. I'm not sure why you're getting this error, since there may be many possible reasons, but if you're using a hypervisor host - such as ESXi - and your OSes are working fine after boot and you're only getting this error during boot, you need to update your hypervisor host, since it does not fully support the newly upgraded kernel version. If you're getting the same error on the latest version of the hypervisor, or if you're not using virtual machines and it's happening on your primary OS, you need to check whether your hardware is working fine. But this is not a CentOS or RHEL problem.
After upgrading my CentOS 7 kernel from 3.10.0 to 4.8.7, while rebooting the system I will get the following lines: [ 0.641455] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 0 (-19)[ 0.641734] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 1 (-19)[ 0.641873] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 2 (-19)[ 0.641956] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 3 (-19)[ 0.642048] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 4 (-19)[ 0.642048] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 5 (-19)[ 0.984906] sd 0:0:0:0: [sda] Assuming drive cache: write throughWhat is the failed policies and how should I fix it?
Kernel 4.8.7 failure on cpufreq - CentOS 7
Nope, they're absolutely the same. The following is a quote from Linux Kernel in a Nutshell by Greg Kroah-Hartmandebug Enable kernel debugging. Cause the kernel log level to be set to the debug level, so that all debug messages will be printed to the console at boot time.quiet Disable all log messages. Set the default kernel log level to KERN_WARNING (4), which suppresses all messages during boot except extremely serious ones. (Log levels are defined under the loglevel parameter.)
Besides the possibility of userspace applications that care which one you use reading /proc/cmdline, what is the difference between using the kernel parameter quiet, versus loglevel=4, and the parameter debug, versus loglevel=7? Is there any?
Kernel parameter quiet versus loglevel=4 and debug versus loglevel=7
I asked the kernel developers the same question and am posting the answer here if anyone else has the same question (relevant kernel mail thread). There is no kernel boot option or config option to enable or disable IOMMU SW bounce buffers. SW bounce buffers are internally enabled for any untrusted PCI device. Linux kernel considers devices/interfaces with external-facing ports as untrusted (more accurate details can be found in kernel patches and kernel src itself, e.g. - https://patchwork.kernel.org/project/linux-pci/patch/[emailprotected]/).
I am trying to figure out how to disable bounce buffers used in IOMMU when the hardware IOMMU is used. To give more context, when IOMMU_DEFAULT_DMA_STRICT is set in the kernel it enables strict IOTLB invalidations on page unmap. Also, it uses an "additional layer of bounce-buffering". Reference: config IOMMU_DEFAULT_DMA_STRICT bool "Translated - Strict" help Trusted devices use translation to restrict their access to only DMA-mapped pages, with strict TLB invalidation on unmap. Equivalent to passing "iommu.passthrough=0 iommu.strict=1" on the command line. Untrusted devices always use this mode, with an additional layer of bounce-buffering such that they cannot gain access to any unrelated data within a mapped page.According to the description, this feature enables both strict IOTLB invalidations and bounce-buffer for untrusted PCI devices. My current understanding is this config option still uses hardware IOMMU and bounce-buffers together (please correct me if I am wrong). I want a way to enable/disable only the bounce buffers in IOMMU to find the performance overhead involved. In other words, I want to find the overhead of this "additional layer". Please let me know if there is a way to enable/disable only the SW bounce buffers when hardware IOMMU is used. What I tried so far: I noticed there are kernel command line options such as iommu=soft and swiotlb={force | noforce}. iommu=soft seems to be a replacement for the hardware IOMMU using software bounce-buffering (reference https://www.kernel.org/doc/Documentation/x86/x86_64/boot-options.txt) which is not what I want. I want to enable/disable bounce-buffering when it's used as an "additional layer" for the hardware IOMMU. swiotlb=force seems to be what I want as it forces all the IO operations through SW bounce buffers. However, it does not specify whether hardware IOMMU is still used or not. It would be great if someone can confirm this. If this is the case, to enable hardware IOMMU without SW bounce buffers I will use below kernel cmdline parameters: intel_iommu=on iommu=forceTo enable SW bounce buffers with HW IOMMU: intel_iommu=on iommu=force swiotlb=force
How to disable/enable bounce-buffers in IOMMU?
Reserved memory is memory the kernel cannot/should not use as regular memory, for whatever reason. Protected memory (sometimes also known as persistent memory) is memory that is guaranteed to keep its contents through a reboot or power loss, either because it is a Non-Volatile DIMM (NVDIMM) or for some other reason. e820 type 12 is tricky to interpret because in some systems pre-dating ACPI 6.0 it was used to indicate NVDIMMs, i.e. protected/persistent memory, but ACPI 6.0 defines that as type 7, and re-defines type 12 as "OEM reserved". Comments in kernel's arch/x86/include/asm/e820/types.h indicate some older systems also used type 6 to indicate protected/persistent memory, i.e. the early NVDIMM support was a vendor-specific mess until ACPI 6.0 provided new definitions (and re-defined the old ones to mean "don't touch this mess"). Linux apparently ignores the persistence capabilities of e820 type 12 memory unless the CONFIG_X86_PMEM_LEGACY=y is set.
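The type-12/pmem case is easy to see in practice. With CONFIG_X86_PMEM_LEGACY=y, a region marked protected via memmap shows up as a legacy pmem device; a sketch (the size and offset are arbitrary examples and must fall within real RAM):
# kernel command line:
memmap=4G!12G
# after reboot:
$ ls /dev/pmem*
/dev/pmem0
$ grep -i persistent /proc/iomem
  300000000-3ffffffff : Persistent Memory (legacy)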
The Linux kernel takes a memmap parameter *) to manually designate memory regions for different use-cases. Q: What is the difference between reserved memory (memmap=nn[KMG]$ss[KMG]) and protected memory (memmap=nn[KMG]!ss[KMG])? I.e. how are they treated by the kernel and when is either used? In regards to /proc/iomem, I believe reserved memory is listed as Reserved, and protected memory is listed as RAM buffer. Is that correct?*) I found no good linkable reference page that isn't a wall of text, so I'm appending a copy from the kernel docs: memmap=exactmap [KNL,X86] Enable setting of an exact E820 memory map, as specified by the user. Such memmap=exactmap lines can be constructed based on BIOS output or other requirements. See the memmap=nn@ss option description.memmap=nn[KMG]@ss[KMG] [KNL] Force usage of a specific region of memory. Region of memory to be used is from ss to ss+nn. If @ss[KMG] is omitted, it is equivalent to mem=nn[KMG], which limits max address to nn[KMG]. Multiple different regions can be specified, comma delimited. Example: memmap=100M@2G,100M#3G,1G!1024Gmemmap=nn[KMG]#ss[KMG] [KNL,ACPI] Mark specific memory as ACPI data. Region of memory to be marked is from ss to ss+nn.memmap=nn[KMG]$ss[KMG] [KNL,ACPI] Mark specific memory as reserved. Region of memory to be reserved is from ss to ss+nn. Example: Exclude memory from 0x18690000-0x1869ffff memmap=64K$0x18690000 or memmap=0x10000$0x18690000 Some bootloaders may need an escape character before '$', like Grub2, otherwise '$' and the following number will be eaten.memmap=nn[KMG]!ss[KMG] [KNL,X86] Mark specific memory as protected. Region of memory to be used, from ss to ss+nn. The memory region may be marked as e820 type 12 (0xc) and is NVDIMM or ADR memory.memmap=<size>%<offset>-<oldtype>+<newtype> [KNL,ACPI] Convert memory within the specified region from <oldtype> to <newtype>. If "-<oldtype>" is left out, the whole region will be marked as <newtype>, even if previously unavailable. If "+<newtype>" is left out, matching memory will be removed. Types are specified as e820 types, e.g., 1 = RAM, 2 = reserved, 3 = ACPI, 12 = PRAM.
Linux kernel difference between protected and reserved memory? (memmap parameter)
But is there a way for me to find this kernel documentation offline? Is it a package that I'd need to install?Yes, most distributions provide the kernel documentation for their kernel in a package. On Debian, this is linux-doc, which is a meta-package pulling in the default kernel’s documentation for whichever release you’re using (version-specific packages are available too, *e.g. linux-doc-4.19). On RHEL, CentOS etc. it’s kernel-doc. You’ll find the file you’re looking for in this case in /usr/share/doc/kernel-doc-*/Documentation/sysctl/kernel.txt on RHEL. In newer versions the file has been converted to ReSTructured text and can be found in .../Documentation/admin-guide/sysctl/kernel.rst (which is also where you can find the current kernel sysctl documentation on the kernel web site). Checking the packaged version gives you a better chance to get the documentation matching your running kernel; in some cases though the current documentation is more accurate, even for older kernels, and this is the case here — I improved the documentation following Linux Kernel.org misleading about kernel panic /proc/sys/kernel/panic, and that ended up in version 5.7 of the kernel.
The NOTES section of $ man 5 sysctl.conf states: The description of individual parameters can be found in the kernel documentation. But is there a way for me to find this kernel documentation offline? Is it a package that I'd need to install? For example, I came across the kernel.panic parameter, which on my system is set to 0 by default. Looking it up online here, it's described to be: panic:The value in this file represents the number of seconds the kernel waits before rebooting on a panic. When you use the software watchdog, the recommended setting is 60.But there is no reasonable way I'd guessed that the 0 there referred to 0 seconds till auto reboot without searching it up online.
Where to get offline documentation/descriptions of individual sysctl kernel tunable parameters?
You can't really change the kernel command-line after boot, but what you can do is reproduce the effects of setting or unsetting the quiet command-line through other means, which should accomplish what you want to achieve here. In short, to increase verbosity once you don't want quiet anymore, you can use this command: # echo 7 >/proc/sys/kernel/printkAnd to emulate what quiet does, this is what you can use: # echo 4 >/proc/sys/kernel/printkThis should take care of the kernel side of the setting... But sometimes userspace will also change behavior based on this kernel option. For instance, systemd will parse the quiet option in the kernel command-line, and act as if ShowStatus=auto was used in /etc/systemd/system.conf. If you want to revert that (to enforce the default and ignore the quiet option), edit that config file and uncomment the ShowStatus=yes line there, which should take care of it. There might be other systems in userspace that look at this option, so you might need to take a closer look at those to see how they behave and how to reproduce (or undo) the behavior of the option being present in the kernel command-line. The following is a deep dive into the sources to explain the behavior of the quiet option in the kernel and systemd.The kernel parses the quiet option by calling the quiet_kernel() initialization function, which does: static int __init quiet_kernel(char *str) { console_loglevel = CONSOLE_LOGLEVEL_QUIET; return 0; }early_param("quiet", quiet_kernel);The console_loglevel pseudo-variable is actually the first element of the console_printk array: extern int console_printk[];#define console_loglevel (console_printk[0])Log level "quiet" is defined as 4: #define CONSOLE_LOGLEVEL_QUIET 4 /* Shhh ..., when booted with "quiet" */A few lines below, the default log level is defined through a kernel config: /* * Default used to be hard-coded at 7, we're now allowing it to be set from * kernel config. */ #define CONSOLE_LOGLEVEL_DEFAULT CONFIG_CONSOLE_LOGLEVEL_DEFAULTAnd that kernel config is set in Kconfig.debug, still defaults to 7: config CONSOLE_LOGLEVEL_DEFAULT int "Default console loglevel (1-15)" range 1 15 default "7"(You might want to check that your kernel is using the default config, either in /boot/config-* or in /proc/config.gz.) And for more details on using /proc/sys/printk, see the kernel documentation for it. But, in short, it is possible to write only a single number, in which case only the first element of the array will be updated, which is what you want here.systemd will also parse the kernel command-line, looking for entries typically named systemd.*, but it turns out systemd also recognizes the quiet kernel command-line and uses it to set the ShowStatus: } else if (streq(key, "quiet") && !value) { if (arg_show_status == _SHOW_STATUS_UNSET) arg_show_status = SHOW_STATUS_AUTO;In this case, it will only set it if it wasn't previously set (_SHOW_STATUS_UNSET) and will set it to "auto" (SHOW_STATUS_AUTO.) Another way to set the ShowStatus is through the configuration file: { "Manager", "ShowStatus", config_parse_show_status, 0, &arg_show_status },This line describes the configuration option named ShowStatus= under the [Manager] section of system.conf. The parser for this option takes either the "auto" string (in which case it sets it to SHOW_STATUS_AUTO) or takes a boolean, which can be "yes", "true" or "1" to enable it, or "no", "false" or "0" to disable it. The systemd documentation for --show-status= is also pretty helpful here. 
It cites the ShowStatus= configuration too (since passing systemd command-line arguments directly is not always easy to do, updating a configuration file is definitely a more straightforward way to configure this setting.)I hope you find this helpful and that it helps you accomplish the right verbosity for your particular use case!
I am developing an embedded Linux device. I have successfully created an InitramFS CPIO archive that runs quickly after boot. Now, I want to change the initial kernel command line to include "quiet" parameter so I can boot even faster. However, once the splash screen is displayed in the InitramFS, I want to remove the quiet option for the kernel so the remainder of the boot is NOT quiet. How can I achieve this? How can I reverse the initial "quiet" kernel command line option once I've reached the InitramFS? Thanks.
Linux Modify/Add Kernel Command Line from InitramFS "UserSpace"
The log level was being set to a high value because of a kernel fault. I figured this out with the help of the System76 support team. The solution to my specific problem was to install the System76 ACPI DKMS driver, and info about the solution is now on the Arch Wiki. Information about printk being set to a high value (of 15) in the case of a kernel fault is mentioned on the following man page: $ man 2 syslog ... /proc/sys/kernel/printk /proc/sys/kernel/printk is a writable file containing four integer val‐ ues that influence kernel printk() behavior when printing or logging error messages. The four values are: console_loglevel Only messages with a log level lower than this value will be printed to the console. The default value for this field is DE‐ FAULT_CONSOLE_LOGLEVEL (7), but it is set to 4 if the kernel command line contains the word "quiet", 10 if the kernel command line contains the word "debug", and to 15 in case of a kernel fault (the 10 and 15 are just silly, and equivalent to 8). The value of console_loglevel can be set (to a value in the range 1–8) by a syslog() call with a type of 8. ...
I recently installed Arch Linux on a System76 Lemur Pro laptop. The installation seemed to complete successfully, but the console_loglevel is set to the very high value of 15. The following command allowed me to draw this conclusion: # cat /proc/sys/kernel/printk 15 4 1 4The high console_loglevel causes a flood of kernel messages to be printed to the console, which makes it barely usable. I figured out that I can temporarily change the console_loglevel by running # echo 4 > /proc/sys/kernel/printk. But I have so far been unable to permanently change the console_loglevel so it maintains its value after every boot. I've tried the following methods to permanently change it:creating a /etc/sysctl.d/20-quiet-printk.conf file with the contents kernel.printk = 4 4 1 4 and then running sysctl -p /etc/sysctl.d/20-quiet-printk.conf (ref1, ref2) creating a /etc/sysctl.conf file with the contents kernel.printk = 4 4 1 4 (ref1, ref2) adding quiet loglevel=3 to the GRUB_CMDLINE_LINUX_DEFAULT entry in /etc/default/grub, and regenerating the GRUB configuration file using grub-mkconfig -o /boot/grub/grub.cfg (ref1, ref2)Unfortunately, none of these methods worked, which leads me to believe that there is some other factor at play which is setting the console_loglevel to 15, and therefore overriding my settings above. How can I determine what is setting the console_loglevel?
What is setting the console_loglevel at boot?
The Memory-Type Range Registers (MTRR) can control caching behaviour with respect to memory writes. In both your logs, no specific behaviour is enabled. If it was enabled, it would look like this (from an older system of mine): MTRR default type: uncachable MTRR fixed ranges enabled: 00000-9FFFF write-back A0000-EFFFF uncachable F0000-FFFFF write-protect MTRR variable ranges enabled: 0 base 000000000 mask FE0000000 write-back 1 base 020000000 mask FF8000000 write-back 2 disabled 3 disabled 4 disabled 5 disabled 6 disabled 7 disabledTypically this is only needed for older graphics cards, where it can influence performance. So your logs do not indicate abnormal behavior with respect to MTRRs. The only potential thing is pmd_set_huge: Cannot satisfy [mem 0xf8000000-0xf8200000] with a huge-page mapping due to MTRR override.and it's impossible to say why this is there without seeing the rest of the logs, or poking around in your system: What this memory range is, where the MTRR override comes from, and if it would be suitable for huge-page mapping in the first place. So it's quite possible this is fine, as well, and it is some PCI card I/O space that just cannot have huge page-tables.
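If you want to poke further, the legacy /proc/mtrr interface shows the current ranges in a compact form (example output from a machine with populated MTRRs; on the systems in the question the list may simply be empty/default):
$ cat /proc/mtrr
reg00: base=0x000000000 (    0MB), size= 2048MB, count=1: write-back
reg01: base=0x080000000 ( 2048MB), size= 1024MB, count=1: write-back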
[Sun Mar 1 07:51:40 2020] MTRR default type: uncachable [Sun Mar 1 07:51:40 2020] MTRR fixed ranges enabled: [Sun Mar 1 07:51:40 2020] MTRR variable ranges enabled: [Sun Mar 1 07:51:40 2020] pmd_set_huge: Cannot satisfy [mem 0xf8000000-0xf8200000] with a huge-page mapping due to MTRR override.Noticed these messages just now, after I've rebooted the server several days ago. Might be relevant: enable_mtrr_cleanup found in kernel parameters, I quote:The kernel tries to adjust MTRR layout from continuous to discrete, to make X server driver able to add WB entry later. This parameter enables that.I don't understand the above, but I feel I should mention the hardware: It's an older piece from Dell, PowerEdge T20 with CPU (and iGPU) Intel Xeon E3-1225 v3 3.2GHz, 8MB cache, 4C/4T, full specs on Intel's Ark + it has 32 GB of DDR3 in UDIMM. All I managed to find on MTRR (Memory Type Range Register) is on Wikipedia, sadly I do not understand much of this either. Any hints in more or less layman terms? Should I even care for that dmesg message on my Debian 10?As opposed to the server above, here is relevant part mentioning MTRR on hardware being my newer laptop also from Dell, Inspiron 15, 32 GB of DDR4 in SO-DIMM: [Sat Mar 7 10:00:42 2020] MTRR default type: write-back [Sat Mar 7 10:00:42 2020] MTRR fixed ranges enabled: [Sat Mar 7 10:00:42 2020] MTRR variable ranges enabled:I can see little difference, maybe there is none in real-word application... maybe there is.
MTRR (Memory Type Range Register) in Debian 10 dmesg messages
It looks to me like KERN_DEBUG and lower will not get compiled in unless you set an appropriate flag in your kconfig. http://lxr.free-electrons.com/source/include/linux/printk.h?v=4.10#L280 I highly doubt the overhead of a function call and checking an if statement is an issue though and that is all the printk(KERN_DEBU ...) will run.
I noticed that kernel has different console levels for the printk. I also noticed this post [1]. I understand that we can change the /proc/sys/kernel/printk to change the printk level for the console; We can even use the dmesg --level to change the display level for dmesg. However, my question is: If I have a printk(KERN_DEBUG "debug message") line in the kernel, can I configure the system to advice kernel not to run the printk(KERN_DEBU ...) statement, instead of just not showing the message? I don't want the kernel to run this printk because printk will cause some performance overhead. Even though we don't see the message print out by dmesg, the kernel may still save it somewhere else, which may slow down the system a little bit (say several ms), which I want to avoid. Thank you very much for your time and help in this question! [1] can't filter printk messages
How to let kernel not run the printk with KERN_DEBUG
I finally managed to install Debian Buster fully unattended by using the following kernel arguments:
:d10-dc-node
set base-url https://d-i.debian.org/daily-images/amd64/daily/netboot/debian-installer/amd64
kernel ${base-url}/linux
initrd ${base-url}/initrd.gz
initrd tftp://my.ipxe.server/preseed/debian_buster_node.seed /tmp/debian_buster_node.seed
imgargs linux auto vga=normal root=/dev/ram rw file=/tmp/debian_buster_node.seed interface=eno1 fb=false debian-installer=en_US.UTF-8 locale=en_US.UTF-8 kbd-chooser/method=us auto-install/enable=true debconf/frontend=noninteractive priority=critical console-setup/ask_detect=false keyboard-configuration/xkb-keymap=us keyboard-configuration/modelcode=pc105 keyboard-configuration/layoutcode=us keyboard-configuration/variant=USA ---
boot
Network configuration and hostname will be set with DHCP. Thanks
I am trying to auto-install Debian Buster with iPXE; it seems that the boot parameters don't work in the iPXE menu. I always get the language selection, so the preseed isn't loaded. Here is the relevant entry in the iPXE menu:
:d10-dc-node
set base-url https://d-i.debian.org/daily-images/amd64/daily/netboot/debian-installer/amd64
kernel ${base-url}/linux
initrd ${base-url}/initrd.gz
imgargs linux vga=normal root=/dev/ram rw preseed/url=tftp://my.ipxe.server/preseed/debian_buster_node.seed netcfg/choose_interface=eno1 debian-installer/framebuffer=false debian-installer/locale=en_US kbd-chooser/method=us auto-install/enable=true debconf/frontend=noninteractive debconf/priority=critical console-setup/ask_detect=false keyboard-configuration/modelcode=pc105 keyboard-configuration/layoutcode=us keyboard-configuration/variant=USA hostname=ubuntu ---
and here is the preseed part:
### Keyboard
d-i console-setup/ask_detect boolean false
d-i keyboard-configuration/layout select USA
d-i keyboard-configuration/variant select USA
d-i keyboard-configuration/modelcode string pc105
d-i keyboard-configuration/xkb-keymap select en
d-i keyboard-configuration/layout string English
### Locales
d-i debian-installer/country string DE
d-i debian-installer/language string en
d-i debian-installer/locale string en_US.UTF-8
d-i localechooser/supported-locales multiselect en_US.UTF-8, de_DE.UTF-8
When I try to set the base-url to http://ftp.de.debian.org/debian/dists/buster/main/installer-amd64/current/images/netboot/debian-installer/amd64 it works fine until loading the modules; then I get the following error: "No kernel modules were found", which I guess is because of the differing kernel versions.
boot parameters seems not to work with ipxe and daily image
They are the same; the main difference is that kernel parameters can be set in three ways: 1. during the boot sequence via GRUB, either (a) interactively in the GRUB interface during boot, or (b) in the configuration files, regenerating `grub.cfg` (permanent); 2. at runtime, via the sysctl command or the /proc/sys/* directory; 3. by configuring and compiling the kernel from source. Also, don't confuse kernel parameters with GRUB's own settings, which are specific to GRUB, such as: changing the boot order, customizing the GRUB menu/entries, or changing the default boot timeout. Finally, to boot from PXE you should have a network interface that supports it, and configure the machine to boot from it in the BIOS.
I often see editing of grub2 parameters to change kernel parameters (i.e. loglevel, quiet, intremap, etc...), but I think there are also grub2 parameters and I'm lost to what they actually are. Specifically, does grub2 need any special parameters to handle a network boot (PXE)? In legacy-grub, I've had to add macappend but I'm not sure if this is a kernel parameter or if it something for the grub configuration. Thanks
Is there a difference between Kernel parameters and the bootloader parameters?
I’m not sure it’s documented explicitly in the kernel, but the x86-specific boot command-line parsing includes this comment:Find a non-boolean option, that is, "option=argument". In accordance with standard Linux practice, if this option is repeated, this returns the last instance on the command line.This allows users to add settings to the end of their command-line without caring about any preceding values in the command-line. Looking at the generic parsing code confirms this: parameters are read one after another, and any value set by a duplicated parameter is overwritten by the last instance. In your example, foo=16 wins. Note that tools which parse /proc/cmdline have their own behaviour and may not follow the kernel convention.
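A simple way to convince yourself is to use a parameter whose effect is visible at runtime; a sketch using loglevel (the first, tab-separated field of /proc/sys/kernel/printk is the console log level):
# booted with: loglevel=3 ... loglevel=7
$ cut -f1 /proc/sys/kernel/printk
7
The last instance won.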
This was prompted by another question elsewhere, which after digging into briefly a quick online search ("linux kernel command line override priority" and some variations) turned up absolutely nothing. The issue is that /proc/cmdline indicates a parameter has been included twice with different values. My question is NOT about why that is or how it can happen, it's which one has precedence. In other words, given this as a commandline: foo=12 console=tty1 foo=16If foo is a setting which cannot meaningfully have two values, is there any convention for which one applies?
Do subsequent arguments override previous from linux kernel command line, or vice versa?
I/O schedulers are assigned globally at boot time; even if you use multiple elevator=[value] assignments, only the last one will take effect. To automatically/permanently set per-device schedulers you could use udev rules, systemd services, or configuration and performance tuning tools like tuned. As to your other question: yes, elevator=none is the correct value to use for NVMe storage.
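The udev approach mentioned above could look something like this; a sketch (scheduler names depend on whether your kernel uses the legacy block layer or blk-mq, e.g. noop vs none):
# /etc/udev/rules.d/60-ioschedulers.rules
# spinning disks
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="noop"
# NVMe
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"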
We have systems with both spinning mechanical disks, and NVME storage. We want to reduce the CPU overhead for IO by taking any IO scheduler out of the way. We want to specify this on the Linux boot command line; i.e. in GRUB_CMDLINE_LINUX, in the file /etc/default/grub.For mechanical disks, we can append elevator=noop to the command line. This corresponds to the noop value in /sys/block/sda/queue/scheduler For NVME storage, we instead use none in /sys/block/nvme0n1/queue/scheduler; which presumably (could not confirm) can be specified at boot time by appending elevator=none.This becomes a two-part question:Is elevator=none the correct value to use for NVME storage in GRUB_CMDLINE_LINUX? Can both values be specified in GRUB_CMDLINE_LINUX?If the second is correct, I'm guessing that elevator=noop will set correctly for the spinning disks, but the NVME controller will gracefully ignore it; then elevator=none will set correctly for NVME disks, but the spinning disk controller will gracefully ignore that.
How to specify multiple schedulers on the kernel boot command line?
tl;dr: Run this in dom0: qvm-prefs --set vmnamehere kernelopts 'nopat sysrq_always_enabled audit=0' In Qubes OS (4.0), if you want to add new kernel parameters for a specific VM (AppVM or TemplateVM) you can (only?) do so from dom0. First, see what kernel parameters are already added(because you need to specify them when you set the new ones), in dom0 execute: $ qvm-prefs --get dev01-w-s-f-fdr28 kernelopts nopat(dev01-w-s-f-fdr28 is the name of my VM, but don't let that confuse you) Note that nomodeset console=hvc0 rd_NO_PLYMOUTH rd.plymouth.enable=0 plymouth.enable=0 (seen in OP) are not reported. You can find them set in file /usr/share/qubes/templates/libvirt/xen.xml which is not something you're expected to ever modify: [ctor@dom0 usr]$ grep -C1 'nomodeset console=hvc0 rd_NO_PLYMOUTH rd.plymouth.enable=0 plymouth.enable=0' /usr/share/qubes/templates/libvirt/xen.xml {% if vm.kernel %} <cmdline>root=/dev/mapper/dmroot ro nomodeset console=hvc0 rd_NO_PLYMOUTH rd.plymouth.enable=0 plymouth.enable=0 {{ vm.kernelopts }}</cmdline> {% endif %}To set the new kernel parameters you have to remember to also specify the existing ones(reported by --get above ie. nopat), in dom0 execute: $ qvm-prefs --set dev01-w-s-f-fdr28 kernelopts 'nopat sysrq_always_enabled audit=0'Verify, in dom0: $ qvm-prefs --get dev01-w-s-f-fdr28 kernelopts nopat sysrq_always_enabled audit=0Restart the VM(aka qube), then verify inside the VM: [user@dev01-w-s-f-fdr28 ~]$ cat /proc/cmdline root=/dev/mapper/dmroot ro nomodeset console=hvc0 rd_NO_PLYMOUTH rd.plymouth.enable=0 plymouth.enable=0 nopat sysrq_always_enabled audit=0
How to add sysrq_always_enabled and audit=0 kernel parameters to an AppVM in QubesOS 4.0 ? Current /proc/cmdline inside the VM is: [user@dev01-w-s-f-fdr28 ~]$ cat /proc/cmdline root=/dev/mapper/dmroot ro nomodeset console=hvc0 rd_NO_PLYMOUTH rd.plymouth.enable=0 plymouth.enable=0 nopat
How to add VM kernel parameters in Qubes OS 4.0 ?
Initially, the nohz_full kernel parameter was meant to only (*):

set the specified list of CPUs whose tick will be stopped whenever possible.

while isolcpus was meant to (*):

Specify one or more CPUs to isolate from disturbances specified in the flag list

Disturbance covers a much wider area than the timer tick alone. As a matter of fact, if work queues are shared among all CPUs and the scheduler algorithms must consequently run on all these CPUs for load balancing's sake… this also constitutes a disturbance that nohz_full won't prevent but isolcpus will. This patch (from 2015) even acknowledged that:

nohz_full is only useful with isolcpus also set, since otherwise the scheduler has to run periodically to try to determine whether to steal work from other cores.

And made it so that:

when booting with nohz_full=xxx on the command line, we should act as if isolcpus=xxx was also set, and set (or extend) the isolcpus set to include the nohz_full cpus.

Therefore, we can nowadays consider that if the specified sets of CPUs are identical, it is no longer necessary to specify both parameters.

(*) quoting The Linux kernel user’s and administrator’s guide
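In current kernels this relationship is visible in the parameter syntax itself: isolcpus takes flags, so the combined effect can be spelled either way (the CPU list is an example):
# two parameters, older style:
nohz_full=2-5 isolcpus=2-5
# one parameter with flags:
isolcpus=nohz,domain,2-5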
When isolating CPU cores for jitter-sensitive processes it is common to use both boot parameters nohz_full and isolcpus (I know the latter is deprecated in favor of cpusets, but it's still around). isolcpus also has a nohz parameter. I wonder if it has the same effect as nohz_full?
What's the difference between the kernel boot parameters nohz_full and isolcpus=nohz
fio is passing this value [depth] to the operating systemandAccordingly, fio is passing this value to the operating system. So how does FIO do this?There might be a misconception here: fio is NOT directly passing a depth parameter to the operating system. If possible, fio tries to submit I/O up to the iodepth specified using a given ioengine. If that depth is reached then fio will wait for (some) outstanding I/O to complete before it tries to submit more I/O...How does FIO benchmark set [io]depth?It depends on the ioengine, it depends on the fio parameters mentioned in What exactly is iodepth in fio? , https://serverfault.com/questions/923487/what-does-iodepth-in-fio-tests-really-mean-is-it-the-queue-depth and https://www.spinics.net/lists/fio/msg07191.html . Without small fixed examples there's too much to explain. There comes a point where there's nothing else to do other than read and understand the code of fio itself... Fio has main loop for submitting I/Os (see https://github.com/axboe/fio/blob/fio-3.8/backend.c#L1055 ): static void do_io(struct thread_data *td, uint64_t *bytes_done) { [...] while ((td->o.read_iolog_file && !flist_empty(&td->io_log_list)) || (!flist_empty(&td->trim_list)) || !io_issue_bytes_exceeded(td) || td->o.time_based) { [...] } else { ret = io_u_submit(td, io_u); if (should_check_rate(td)) td->rate_next_io_time[ddir] = usec_for_io(td, ddir); if (io_queue_event(td, io_u, &ret, ddir, &bytes_issued, 0, &comp_time)) break; /* * See if we need to complete some commands. Note that * we can get BUSY even without IO queued, if the * system is resource starved. */ reap: full = queue_full(td) || (ret == FIO_Q_BUSY && td->cur_depth); if (full || io_in_polling(td)) ret = wait_for_completions(td, &comp_time); } [...] } [...] }The ioengine's queuing routine is invoked by a call chain from io_u_submit(). Assuming the ioengine is asynchronous, it may choose to just "queue" the I/Os within fio and then at a later point submit the whole lot down in one go (usually as a result of its getevents() function being called from a wait_for_completions() call chain). However we'll leave tracing through fio's code as an exercise for the reader.if I want to write a benchmark program like fio, how would I control the IO queue depth?You would need to mimic one of fio's (asynchronous) ioengines and have an event loop that was capable of (asynchronously) submitting I/Os AND checking for their completion. Once you had such a thing the idea that you only submit up to a particular depth would be easy - if at any point you have outstanding uncompleted I/O that matches (or exceeds if you aren't checking one by one) the chosen depth then you need to wait for something to be completed before you submit more. You may find aio-stress.c in the Linux Test Project easier to understand/modify than fio if you're making a toy benchmark.
NOTE: My question stems from this other U&L Q - What exactly is iodepth in fio?I want to know how internally FIO sets I/O depth. I.e., one of the parameters we submit to FIO when we run it is "IOdepth" (--iodepth=). How does FIO internally control this parameter with the underlying operating system? Here is an example of the command that we use to run FIO benchmark: $ sudo fio --filename=/dev/nvme0n1 --direct=1 --rw=randwrite --refill_buffers \ --norandommap --randrepeat=0 --ioengine=libaio --bs=8K --iodepth=72 --numjobs=256 \ --time_based --runtime=600 --allow_mounted_write=1 --group_reporting --name=benchtest benchtest: (g=0): rw=randwrite, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=72 As in this example, the value of "iodepth" can be changed. Accordingly, fio is passing this value to the operating system. So how does FIO do this? If you want an actual problem to be solved: if I want to write a benchmark program like fio, how would I control the IO queue depth?
How does FIO benchmark set IOdepth?
After a day with this, I found a few workarounds, with varying results.

1. Swapping out acpi=off for nolapic allows the system to boot and power off, but I then noticed that I was operating on only one core.
2. Disabling "MPS Table Mode" in the BIOS [*] has the same outcome as option one, but with less kernel chatter.
3. I tried the Debian Jessie (older) install disc, went into recovery mode to fire up a shell, and no kernel parameters were needed at all. /proc/cpuinfo showed both cores and poweroff worked.
4. Apparently the core issue has been fixed in the 4.13 kernel. I have not had an opportunity to try this yet, but it may be my preferred option.

[*] http://forums.debian.net/viewtopic.php?f=10&t=134408
Heavily related to a question on the sister site (https://serverfault.com/questions/874943/debian-9-black-screen-during-install), I have recently picked up an HP ProLiant DL380 G5 and attempted to install Debian 9 on it (currently to a USB drive as I wait for my order of HDD's). The OS setup would halt due to a NMI Watchdog error (to the swapper task just like above linked) if I did not add acpi=off and vga=ask to the kernel boot parameters, but now that the OS is present I want to be able to dismiss acpi=off so I can power the system off unattended (server is to be set up on an as-needed basis with WOL, physically placed in a basement). With acpi=off the soft power signal does not respond. Are there any other kernel options I can use to limit ACPI (so the system can boot) but allow the system to turn itself off as well?
ACPI kernel parameter options for HP ProLiant DL380 G5
Like the message says, use the coherent_pool=<size> kernel (boot) parameter. With GRUB, select the desired kernel, press e to modify the boot entry, then append the option to the line starting with kernel. This change won't be preserved across reboots. If you want the change to be permanent, append the option to GRUB_CMDLINE_LINUX in /etc/default/grub: GRUB_CMDLINE_LINUX="... coherent_pool=<size>" Remember to run update-grub to write the new configuration files.
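For instance, assuming 2 MiB is enough for your driver (the size here is a guess; pick whatever the driver actually needs):

GRUB_CMDLINE_LINUX="... coherent_pool=2M"

After running update-grub and rebooting, you can verify the parameter took effect with cat /proc/cmdline.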
I recently attempted to compile a device driver (Xilinx XAPP1052) on Fedora Workstation 20. It gave me the following error.ERROR: 256 KiB atomic DMA coherent pool is too small! Please increase it with coherent_pool= kernel parameterHow would I go about doing this? Is there a command that lets me change the coherent_pool parameter?
How to increase atomic DMA coherent pool?
I think your surmise is correct: they are inherited from the parent namespace. This seems similar to how processes clone themselves using the fork() system call; any desired changes then have to be applied by the clone, using the normal system calls (including replacing the current program with a completely different one using exec(); fork()+exec() is how e.g. the shell runs other programs, although this magic is not usually visible to the user). None of the options to the underlying unshare system call change this. So I'd say the answer to your question is no. http://man7.org/linux/man-pages/man2/unshare.2.htmlOh... that wasn't even an analogy! Look at the option flags:CLONE_NEWNET (since Linux 2.6.24) This flag has the same effect as the clone(2) CLONE_NEWNET flag. Unshare the network namespace, so that the calling process is moved into a new network namespace which is not shared with any previously existing process. Use of CLONE_NEWNET requires the CAP_SYS_ADMIN capability.clone() basically means fork().Since version 2.3.3, rather than invoking the kernel's fork() system call, the glibc fork() wrapper that is provided as part of the NPTL threading implementation invokes clone(2) with flags that provide the same effect as the traditional system call. (A call to fork() is equivalent to a call to clone(2) specifying flags as just SIGCHLD.)
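So any overriding happens after creation rather than at creation. A minimal illustration (run as root): whatever settings the new namespace starts with, you can adjust them from inside it with the normal sysctl interface:

# unshare -n sh -c 'sysctl -w net.ipv4.conf.all.forwarding=0; sysctl net.ipv4.conf.all.forwarding'
net.ipv4.conf.all.forwarding = 0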
What are the default kernel parameters, when creating a new network namespace? Is there a way to override them upon creation? I think they are inherited by the parent process. An example using unshare: > /sbin/sysctl -a --pattern 'net.ipv4.conf.all.forwarding' net.ipv4.conf.all.forwarding = 1 > unshare -n > /sbin/sysctl -a --pattern 'net.ipv4.conf.all.forwarding' net.ipv4.conf.all.forwarding = 1
Default kernel parameters on new network namespaces
As @derobert pointed out, you have to build the kernel with the F2FS module. In my case it wasn't even included as a loadable module. To build the kernel yourself, grab it from kernel.org. Get the default kernel config for your platform. (I got mine from here for the TI-Nspire calculator series.) Modify it to include F2FS by setting CONFIG_F2FS_FS to y. Save it as .config on the root of the downloaded kernel source, and simply build it using make. You'll then find your fresh kernel stuff in arch/arm/boot.
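As a sketch of those last steps (the cross-toolchain prefix arm-linux-gnueabi- is an assumption; substitute whatever toolchain you actually use):

$ grep F2FS_FS .config
CONFIG_F2FS_FS=y
$ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- -j"$(nproc)"

If you build on the target architecture itself, drop ARCH and CROSS_COMPILE.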
I have my Linux root on an F2FS USB flash drive. The kernel is on another device accessible by the bootloader. I'm trying to start it with the parameters root=/dev/sda1 rootwait rootfstype=f2fs, but I always end up with a kernel panic: VFS: Cannot open root device "sda1" or unknown-block(8,1): error -19 Please append a correct "root=" boot option; here are the available partitions: 0100 8192 ram0 (driver?) 0101 8192 ram1 (driver?) 0800 3913728 sda driver: sd 0801 3913728 sda1 973c7215-01 Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,1) ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,1)sda1 is the correct device, and I'm able to mount it with no problems to another computer running Arch Linux. I partitioned it using fdisk and formatted it using mkfs.f2fs from f2fs-tools. Why does the kernel panic? Am I missing the F2FS module? If so, how can I load it at boot time?
Linux root on F2FS
I have an older laptop with low processing power as well; I use it on the go as it's very light and the battery lasts quite long. It wasn't unstable, though, but it got unresponsive whenever I did an update in the background. I run Arch on it. I moved from the vanilla kernel over to the cfs-zen tweaks, which made a difference in responsiveness. I wanted even more and installed the zen kernel; I can't really tell the difference from the cfs-zen tweaks alone, but in general it responds really well even in stressful situations. Just give it a go and test it out; on most distributions it's quite easy to switch back and forth between the zen kernel and the vanilla kernel.
I'm using Arch on a very old computer with Chrome and it crashes pretty often, plus the CPU consumption is very high. I read that using cfs-zen-tweaks could improve the responsiveness. Which is better, using cfs-zen-tweaks or a linux-zen kernel? What is the difference?
linux-zen VS cfs-zen-tweaks
Maybe this has led to a mix-up in your memory: In /proc/cmdline (i.e., command-line arguments for the kernel itself), arguments are separated by 0x20. In /proc/some_process_id/cmdline (i.e., command-line arguments for individual user processes), arguments are separated by 0x00.
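You can see both conventions for yourself (the sleep command is just an arbitrary example process):

$ cat /proc/cmdline
$ sleep 300 &
$ tr '\0' '\n' < /proc/$!/cmdline
sleep
300

The first file prints as a single space-separated line; the second only becomes readable once tr translates its NUL separators into newlines.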
I thought it was NUL. But today when I wrote a script, I found that it was space. Is it configurable? Or just my memory wrong?
Are Linux kernel parameters separated by space (0x20) or NUL (0x00)?
A generic way to try to fix things is to increase the mac80211 kernel module parameters having to do with disconnections. From modinfo -p mac80211:max_nullfunc_tries:Maximum nullfunc tx tries before disconnecting (reason 4). (int) max_probe_tries:Maximum probe tries before disconnecting (reason 4). (int) beacon_loss_count:Number of beacon intervals before we decide beacon was lost. (int) probe_wait_ms:Maximum time(ms) to wait for probe response before disconnecting (reason 4). (int)You can go to the directory /sys/module/mac80211/parameters and do cat [parameter] to see the current value of a parameter and (as root) do echo [value] > [parameter] to (non-persistently) set a parameter to a particular value. To persistently/permanently set the parameters you can create a file in /etc/modprobe.d like this: options mac80211 max_nullfunc_tries=16 options mac80211 max_probe_tries=20 options mac80211 beacon_loss_count=28
My WiFi connection is frequently dropping. Are there any system settings to help with the problem?
System setting to stop WiFi from dropping connection?
To disable retpoline, you need to disable the Spectre variant 2 mitigations using spectre_v2=off on the kernel command line. See the kernel’s list of parameters for details (that link is specifically for 4.18; for other versions, replace “v4.18” in the URL as appropriate).
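On a RHEL-family system such as the 4.18.0-58.el8 kernel in the question, one convenient way to apply this is grubby (shown as an illustration; editing GRUB_CMDLINE_LINUX by hand works equally well):

$ sudo grubby --update-kernel=ALL --args="spectre_v2=off"
$ sudo reboot

After rebooting, /sys/devices/system/cpu/vulnerabilities/spectre_v2 should no longer report the retpoline mitigation as active.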
I need to disable retpoline for a use case. I tried adding noretpoline to the boot parameter but it doesn't seem to work. The output after adding noretpoline param: $ cat /sys/devices/system/cpu/vulnerabilities/spectre_v2 Mitigation: Full generic retpoline, STIBP: disabled, RSB filling Kernel: 4.18.0-58.el8.x86_64
How to disable retpoline?
The man page in man7.org and in Debian has a more useful description:/proc/sys/fs/inode-max (only present until Linux 2.2) This file contains the maximum number of in-memory inodes. This value should be 3-4 times larger than the value in file-max, since stdin, stdout and network sockets also need an inode to handle them. When you regularly run out of inodes, you need to increase this value. Starting with Linux 2.4, there is no longer a static limit on the number of inodes, and this file is removed.Based on the last sentence, it's not there since it's not needed.
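If what you are after is monitoring rather than limiting, the read-only counters are still exposed on modern kernels:

$ sysctl fs.inode-nr fs.inode-state

fs.inode-nr reports the number of allocated and unused in-memory inodes; there is simply no longer a tunable maximum to go with it.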
The following output (from a Vagrant VM running CentOS 6.6) mostly speaks for itself: [root@localhost ~]# echo 131072 > /proc/sys/fs/inode-max -bash: /proc/sys/fs/inode-max: No such file or directory [root@localhost ~]# sysctl -q -p [root@localhost ~]# echo 'fs.inode-max = 131072' >> /etc/sysctl.conf [root@localhost ~]# sysctl -q -p error: "fs.inode-max" is an unknown key [root@localhost ~]# man proc | col -b | grep -A6 '/proc/sys/fs/inode-max$' /proc/sys/fs/inode-max This file contains the maximum number of in-memory inodes. On some (2.4) systems, it may not be present. This value should be 3-4 times larger than the value in file-max, since stdin, stdout and network sockets also need an inode to handle them. When you regularly run out of inodes, you need to increase this value.[root@localhost ~]# uname -a Linux localhost.localdomain 2.6.32-504.el6.x86_64 #1 SMP Wed Oct 15 04:27:16 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux [root@localhost ~]# How to reconcile the man page statement that implies this won't exist on 2.4 kernels, with the fact that it doesn't exist on this 2.6 kernel?
Why does the fs.inode-max kernel tunable not exist on version 2.6 of the Linux kernel?
On the surface, what you've suggested you've tried works for me. Example $ mkdir -p test/src test/firefox$ tree --noreport -fp . `-- [drwxrwxr-x] ./test |-- [drwxrwxr-x] ./test/firefox `-- [drwxrwxr-x] ./test/srcMake the symbolic link: $ ln -s test/src test/firefox$ tree --noreport -fp . `-- [drwxrwxr-x] ./test |-- [drwxrwxr-x] ./test/firefox | `-- [lrwxrwxrwx] ./test/firefox/src -> test/src `-- [drwxrwxr-x] ./test/srcRunning it a 2nd time would typically produce this: $ ln -s test/src test/firefox ln: failed to create symbolic link ‘test/firefox/src’: File existsSo you likely have something else going on here. I would suspect that you have a circular reference where a link is pointing back onto itself. You can use find to sleuth this out a bit: $ cd /suspected/directory $ find -L ./ -mindepth 15
I created this file structure: test/src test/firefoxWhen I run this command: ln -s test/src test/firefoxI would expect a symbolic link test/firefox/src to be created pointing to test/src, however I get this error instead: -bash: cd: src: Too many levels of symbolic linksWhat am I doing wrong? Can you not create a symbolic link to one folder which is stored in a sibling of that folder? What's the point of this?
Too many levels of symbolic links
On a Linux system, when changing the ownership of a symbolic link using chown, by default it changes the target of the symbolic link (ie, whatever the symbolic link is pointing to). If you'd like to change ownership of the link itself, you need to use the -h option to chown:-h, --no-dereference affect each symbolic link instead of any referenced file (useful only on systems that can change the ownership of a symlink)For example: $ touch test $ ls -l test* -rw-r--r-- 1 mj mj 0 Jul 27 08:47 test $ sudo ln -s test test1 $ ls -l test* -rw-r--r-- 1 mj mj 0 Jul 27 08:47 test lrwxrwxrwx 1 root root 4 Jul 27 08:47 test1 -> test $ sudo chown root:root test1 $ ls -l test* -rw-r--r-- 1 root root 0 Jul 27 08:47 test lrwxrwxrwx 1 root root 4 Jul 27 08:47 test1 -> testNote that the target of the link is now owned by root. $ sudo chown mj:mj test1 $ ls -l test* -rw-r--r-- 1 mj mj 0 Jul 27 08:47 test lrwxrwxrwx 1 root root 4 Jul 27 08:47 test1 -> testAnd again, the link test1 is still owned by root, even though test has changed. $ sudo chown -h mj:mj test1 $ ls -l test* -rw-r--r-- 1 mj mj 0 Jul 27 08:47 test lrwxrwxrwx 1 mj mj 4 Jul 27 08:47 test1 -> testAnd finally we change the ownership of the link using the -h option.
I am facing some issue with creating soft links. Following is the original file. $ ls -l /etc/init.d/jboss -rwxr-xr-x 1 askar admin 4972 Mar 11 2014 /etc/init.d/jbossLink creation is failing with a permission issue for the owner of the file: ln -sv jboss /etc/init.d/jboss1 ln: creating symbolic link `/etc/init.d/jboss1': Permission denied$ id uid=689(askar) gid=500(admin) groups=500(admin)So, I created the link with sudo privileges: $ sudo ln -sv jboss /etc/init.d/jboss1 `/etc/init.d/jboss1' -> `jboss'$ ls -l /etc/init.d/jboss1 lrwxrwxrwx 1 root root 11 Jul 27 17:24 /etc/init.d/jboss1 -> jbossNext I tried to change the ownership of the soft link to the original user. $ sudo chown askar.admin /etc/init.d/jboss1$ ls -l /etc/init.d/jboss1 lrwxrwxrwx 1 root root 11 Jul 27 17:24 /etc/init.d/jboss1 -> jbossBut the permission of the soft link is not getting changed. What am I missing here to change the permission of the link?
How to change ownership of symbolic links?
Here's what's happening. If you make a symlink with a relative path, the symlink will be relative. Symlinks just store the paths that you give them. They never resolve paths to full paths. Running
$ pwd
/usr/bin
$ ln -s ls /usr/bin/ls2
creates a symlink named ls2 in /usr/bin to ls (viz. /usr/bin/ls), relative to the directory that the symlink is in (/usr/bin). The above command would create a functional symlink from any directory.
$ pwd
/home/me
$ ln -s ls /usr/bin/ls2
If you moved the symlink to a different directory, it would cease to point to the file at /usr/bin/ls. You are making a symlink that points to Data, and naming it Data. It is pointing to itself. You have to make a symlink with the absolute path of the directory. ln -s "$(realpath Data)" ~/Data
I'm trying to create a symbolic link in my home directory that points to a directory on my external HDD. It works fine when I specify it like this: cd ~ ln -s /run/media/name/exhdd/Data/ DataHowever it creates a faulty link when I try this: cd /run/media/name/exhdd ln -s Data/ ~/DataThis creates a link that I cannot cd into. When I try, bash complains: bash: cd: Data: Too many levels of symbolic linksThe Data symbolic link in my home is also colored in red when ls is set to display colored output. Why is this happening? How can I create a link in that manner? (I want to create a symlink to a directory in my working directory in another directory.)Edit: according to this StackOverflow answer, if the second argument (in my case that'd be ~/Data) already exists and is a directory, ln will create a symlink to the target inside that directory. However, I'm experiencing the same issue with: ln -s Data/ ~/
Create a symbolic link relative to the current directory
I use the following mnemonic: ln has a one-argument form (the 2nd form listed in the man page) in which only the target is required (because how could ln work at all without knowing the target?), and it creates the link in the current directory. The two-argument form is an extension of the one-argument form, thus the target is always the first argument.
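A quick demonstration of the one-argument form (the target path is hypothetical):

$ cd ~/bin
$ ln -s /opt/tool/bin/tool
$ ls -l tool
lrwxrwxrwx ... tool -> /opt/tool/bin/tool

Since the target had to come first there, it has to come first in the two-argument form as well.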
I have used ln to write symbolic links for years but I still get the order of parameters the wrong away around. This usually has me writing: ln -s a band then looking at the output to remind myself. I always imagine to be a -> b as I read it when it's actually the opposite b -> a. This feels counter-intuitive so I find that I'm always second-guessing myself. Does anyone have any tips to help me remember the correct order?
Tips for remembering the order of parameters for ln?
Many programs make use of this technique where there is a single executable that changes its behavior based on how it was executed. There's typically a structure inside the program called a case/switch statement that determines the name the executable was called with and then will call the appropriate functionality for that executable name. That name is usually the first argument the program receives. For example, in C when you write: int main(int argc, char** argv)argv[0] contains the name of the called executable. At least, this is the standard behaviour for all shells, and all executables that use arguments should be aware of it. Example in Perl Here's a contrived example I put together in Perl which shows the technique as well. Here's the actual script, call it mycmd.pl: #!/usr/bin/perluse feature ':5.10';(my $arg = $0) =~ s#./##;my $msg = "I was called as: ";given ($arg) { $msg .= $arg when 'ls'; $msg .= $arg when 'find'; $msg .= $arg when 'pwd'; default { $msg = "Error: I don't know who I am 8-)"; } }say $msg; exit 0;Here's the file system setup: $ ls -l total 4 lrwxrwxrwx 1 saml saml 8 May 24 20:49 find -> mycmd.pl lrwxrwxrwx 1 saml saml 8 May 24 20:34 ls -> mycmd.pl -rwxrwxr-x 1 saml saml 275 May 24 20:49 mycmd.pl lrwxrwxrwx 1 saml saml 8 May 24 20:49 pwd -> mycmd.plNow when I run my commands: $ ./find I was called as: find$ ./ls I was called as: ls$ ./pwd I was called as: pwd$ ./mycmd.pl Error: I don't know who I am 8-)
In Arch Linux, if I do ls -l in /sbin, I can see that reboot, shutdown and poweroff are all symlinks to /usr/bin/systemctl. But issuing reboot, shutdown and systemctl commands obviously does not all have the same behaviour. Is ls -l not showing me full information regarding symlinks? How can I, for example, know what the real symlink of reboot is?
Why are reboot, shutdown and poweroff symlinks to systemctl?
First of all, to find what a command's options do, you can use man command. So, if you run man ln, you will see: -f, --force remove existing destination files -s, --symbolic make symbolic links instead of hard linksNow, the -s, as you said, is to make the link symbolic as opposed to hard. The -f, however, is not to remove the link. It is to overwrite the destination file if one exists. To illustrate: $ ls -l total 0 -rw-r--r-- 1 terdon terdon 0 Mar 26 13:18 bar -rw-r--r-- 1 terdon terdon 0 Mar 26 13:18 foo$ ln -s foo bar ## fails because the target exists ln: failed to create symbolic link ‘bar’: File exists$ ln -sf foo bar ## Works because bar is removed and replaced with the link $ ls -l total 0 lrwxrwxrwx 1 terdon terdon 3 Mar 26 13:19 bar -> foo -rw-r--r-- 1 terdon terdon 0 Mar 26 13:18 foo
I have 2 questions. The first one is for the -sf options and the second one is for the more specific usage of -f options. By googling, I figured out the description of command ln, option -s and -f. (copy from http://linux.about.com/od/commands/l/blcmdl1_ln.htm) -s, --symbolic : make symbolic links instead of hard links -f, --force : remove existing destination filesI understand these options individually. But, how could one use this -s and -f options simultaneously? -s is used for creating a link file and -f is used for removing a link file. Why use this merged option? To know more about ln command, I made some examples. $ touch foo # create sample file $ ln -s foo bar # make link to file $ vim bar # check how link file works: foo file opened $ ln -f bar # remove link file Everything works fine before next command $ ln -s foo foobar $ ln -f foo # remove original fileBy the description of -f option, this last command should not work, but it does! foo is removed. Why is this happening?
Why use 'ln -sf' in Linux?
The easiest way to link to the current directory as an absolute path, without typing the whole path string would be ln -s "$(pwd)/foo" ~/bin/foo_linkThe target (first) argument for the ln -s command works relative to the symbolic link's location, not your current directory. It helps to know that, essentially, the created symlink (the second argument) simply holds the text you provide for the first argument. Therefore, if you do the following: cd some_directory ln -s foo foo_linkand then move that link around mv foo_link ../some_other_directory ls -l ../some_other_directoryyou will see that foo_link tries to point to foo in the directory it is residing in. This also works with symbolic links pointing to relative paths. If you do the following: ln -s ../foo yet_another_linkand then move yet_another_link to another directory and check where it points to, you'll see that it always points to ../foo. This is the intended behaviour, since many times symbolic links might be part of a directory structure that can reside in various absolute paths. In your case, when you create the link by typing ln -s foo ~/bin/foo_linkfoo_link just holds a link to foo, relative to its location. Putting $(pwd) in front of the target argument's name simply adds the current working directory's absolute path, so that the link is created with an absolute target.
I'm trying to create a bunch of symbolic links, but I can't figure out why this is working ln -s /Users/niels/something/foo ~/bin/foo_linkwhile this cd /Users/niels/something ln -s foo ~/bin/foo_linkis not. I believe it has something to do with foo_link linking to foo in /Users/niels/bin instead of /Users/niels/something So the question is, how do I create a symbolic link that points to an absolute path, without actually typing it? For reference, I am using Mac OS X 10.9 and Zsh.
ln -s with a path relative to pwd
You forgot the initial slash before bin/python. This means /usr/bin/prj-python now points to /usr/bin/bin/python. What would you like it to point to exactly?
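To confirm, and then fix it (the target path below is a placeholder; point it at whichever interpreter prj-python is supposed to run):

$ ls -l /usr/bin/prj-python
lrwxrwxrwx ... /usr/bin/prj-python -> bin/python
$ sudo ln -sfn /path/to/prj/bin/python /usr/bin/prj-python

The -f is needed because, as the error says, the (broken) link already exists.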
I created a symbolic link (yesterday) like this: sudo ln -s bin/python /usr/bin/prj-pythonWhen I run: prj-python file.pyI get: prj-python: command not foundWhen I try creating the link again, I get:ln: creating symbolic link `/usr/bin/prj-python': File existsWhy is that happening? My $PATH is:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/opt/real/RealPlayer
Executable symbolic link results in "command not found"
It's not a bug. The use case is for when you want to link a file to the same basename but in a different directory: cd /tmp ln -s /etc/passwd ls -l passwd lrwxrwxrwx 1 xxx xxx 11 Jul 29 09:10 passwd -> /etc/passwdIt's true that when you do this with a filename that is in the same directory it creates a link to itself which does not do a whole lot of good! This works regardless of whether you use symlinks or hard links.
> cd /tmp > ln -s foo > ls -alhF /tmp lrwxrwxrwx 1 user user 3 Jul 29 14:00 foo -> fooIs this a bug in ln or is there a use case for symlinking a file to itself? This is with coreutils 8.21-1ubuntu5.1.
Why does ln -s accept a single argument
You can use the -f, --force option of ln to have it remove the existing symlink before creating the new one. If the destination is a directory, you need to add the -n, --no-dereference option to tell ln to treat the symlink as a normal file. ln -sfn target existing_linkHowever, this operation is not atomic, as ln will unlink() the old symlink before calling symlink(), so technically it doesn't count as changing the value of the link. If you care about this distinction, then the answer is no, you can't change the value of an existing symlink. That said, you can do something like the following to create a new symlink, changing part of the old link value: ln -sfn "$(readlink existing_link | sed s/foo/bar/)" "existing_symlink"
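If you do need an effectively atomic swap, a common trick (using GNU mv's -T so it doesn't descend into the old link) is to build the new link under a temporary name and rename it over the old one, since rename() atomically replaces the destination:

$ ln -sfn new_target link.tmp && mv -T link.tmp link

Readers of link will then see either the old value or the new value, never a missing link.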
Is there a way to replace the value of a symbolic link? For example, I want to change a symbolic link from this: first -> /home/username/foo/very/long/directories/that/I/do/not/want/to/type/againto this: second -> /home/username/bar/very/long/directories/that/I/do/not/want/to/type/againI want to change only foo to bar. Of course I can create a link again, but if it is possible to replace the value of the link, it becomes easier.
Is there way to replace value of symbolic link? [duplicate]
mv moves a file, and ln -s creates a symbolic link, so the basic task is accomplished by a script that executes these two commands: #!/bin/sh mv -- "$1" "$2" ln -s -- "$2" "$1"There are a few caveats. If the second argument is a directory, then mv would move the file into that directory, but ln -s would create a link to the directory rather than to the moved file. #!/bin/sh set -e original="$1" target="$2" if [ -d "$target" ]; then target="$target/${original##*/}" fi mv -- "$original" "$target" ln -s -- "$target" "$original"Another caveat is that the first argument to ln -s is the exact text of the symbolic link. It's relative to the location of the target, not to the directory where the command is executed. If the original location is not in the current directory and the target is not expressed by an absolute path, the link will be incorrect. In this case, the path needs to be rewritten. In this case, I'll create an absolute link (a relative link would be preferable, but it's harder to get right). This script assumes that you don't have file names that end in a newline character. #!/bin/sh set -e original="$1" target="$2" if [ -d "$target" ]; then target="$target/${original##*/}" fi mv -- "$original" "$target" case "$original" in */*) case "$target" in /*) :;; *) target="$(cd -- "$(dirname -- "$target")" && pwd)/${target##*/}" esac esac ln -s -- "$target" "$original"If you have multiple files, process them in a loop. #!/bin/sh while [ $# -gt 1 ]; do eval "target=\${$#}" original="$1" if [ -d "$target" ]; then target="$target/${original##*/}" fi mv -- "$original" "$target" case "$original" in */*) case "$target" in /*) :;; *) target="$(cd -- "$(dirname -- "$target")" && pwd)/${target##*/}" esac esac ln -s -- "$target" "$original" shift done
Can someone give me a command that would: move a file to a new directory and leave a symlink in its old location pointing to its new one?
Move a file and replace it by a symlink
In GNU's ln, there is ln -n, which allows re-pointing a symlink: $ mkdir dir1 dir2 $ ln -s dir1 sym # dir1/ # dir2/ # sym -> dir1/$ ln -nsf dir2 sym # dir1/ # dir2/ # sym -> dir2/BSD's ln uses the flag -h the same way -n is used, though the binary likely supports -n as well, for compatibility with GNU. Repointing an existing link also requires the -f flag.
I have a particular directory full of other directories organized (named) by date. For ease of reference, I have a symlink called current pointing to the latest one. In the script that creates new date directories, I wish to create or fix the current symlink to point to the newest directory once created. I thought the appropriate command would just be, e.g., ln -fs 2017-03-01 currentIf the current symlink doesn't exist yet, this works.However, if the current symlink has already been created (and points, let us say, at the directory 2017-02-28), this doesn't work: Instead of removing the symlink current and creating a new symlink current which points to 2017-03-01, the result will instead be a broken symlink called 2017-03-01 pointing to itself, resting inside the directory 2017-02-28 (which is where the symlink current pointed and still points). This baffled me, so I read the specs for ln. Turns out this is expected behavior:SYNOPSIS ln [-fs] [-L|-P] source_file target_fileln [-fs] [-L|-P] source_file... target_dirDESCRIPTION ... The second synopsis form shall be assumed when the final operand names an existing directory.It seems, then, that there is no way whatsoever to repoint a symlink that currently points to a directory to a new target, where the new target has a name different from the name of the symlink. So ln -fs doesn't work the way I thought it did. Must I rm current, or is there another approach I've overlooked?
How to force creation of a symbolic link?
link is used solely for hard links: it calls the link() system function directly, with no options and no checks of its own before attempting to create the link. ln performs its own error checking, accepts options, and can create both hard and soft links.
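A minimal comparison:

$ touch target
$ link target hard1      # bare wrapper around link(2): hard links only
$ ln -s target sym1      # ln has options, e.g. -s for a symlink

In practice there is rarely a reason to prefer link over ln; it mainly exists because POSIX specifies it as a utility.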
From the man pages: ln - make links between filesand link - call the link function to create a link to a fileThese seem to do the same thing however ln takes a lot of options as well. Is link just a very basic ln? Is there any reason to use link over ln?
What is the difference between the link and ln commands?
The results of both have to be the same, in that a hard link is created to the original file. The difference is in the intended usage, and therefore in the options available to each command. For example, cp can use recursion whereas ln cannot: cp -lr <src> <target>will create hard links in <target> to all files in <src> (it creates new directories, not links to them). The result is that the directory tree structure under <target> will look identical to the one under <src>. It differs from cp -r <src> <target> in that the latter copies each file and folder and gives each a new inode, whereas the former just hard links the files and therefore simply increases their link count. When used to copy a single file, as in your example, the results are identical.
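You can verify the shared inodes yourself:

$ mkdir src; echo data > src/f
$ cp -lr src linked; cp -r src copied
$ ls -i src/f linked/f copied/f

The first two paths print the same inode number; the third prints a new one.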
I am implementing a backup scheme using rsync and hardlinks. I know I can use link-dest with rsync to do the hardlinks, but I saw mention of using "cp -l" before "link-dest" was implemented in rsync. Another method of hardlinking I know of is "ln". So my question is, out of curiosity: is there a difference in making hardlinks using "cp -l" as compared to using "ln"?
Is there a difference between hardlinking with cp -l or ln?
You already have a directory at ~/.pm2/logs. Since that directory exists, the symbolic link is put inside it. If you want ~/.pm2/logs to be a symbolic link rather than a directory, you will have to remove or rename the existing directory first.
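For example (renaming rather than deleting, in case the old logs matter):

$ mv ~/.pm2/logs ~/.pm2/logs.old
$ ln -s /opt/myapp/log ~/.pm2/logs

Both mv and ln -s used this way are plain POSIX, so this also satisfies the portability requirement.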
I want to create a symlink ~/.pm2/logs -> /opt/myapp/logWhen I run ln -sFf /opt/myapp/log ~/.pm2/logsI get a symlink ~/.pm2/logs/log -> /opt/myapp/logwhich is not what I want. I'd prefer a POSIX-compatible solution if possible.
How to create a folder symlink that has a different name?
But something about the syntax is perplexing and counter to what I would expect.The arguments for ln, in the form that you're using it, are:ln [OPTION]... [-T] TARGET LINK_NAME (1st form)The perplexing, unintuitive thing is that when you're creating a symlink, the target argument for ln isn't expected to be a path to a file, but rather the contents of the symlink to be created. If you think about it for a moment, it's obvious that it has to be that way. Consider: $ echo foo >foo $ ln -s foo bar1 $ ln -s $PWD/foo bar2 $ cat bar1 foo $ cat bar2 foo $ ls -l bar1 bar2 lrwxrwxrwx 1 matt matt 3 Dec 29 16:29 bar1 -> foo lrwxrwxrwx 1 matt matt 29 Dec 29 16:29 bar2 -> /home/matt/testdir/fooIn that example I create 2 symlinks, named "bar1" and "bar2", that point to the same file. ls shows that the symlinks themselves have different contents, though - one contains an absolute path, and one contains a relative path. Because of this, one would continue working even if it were moved to another directory, and the other wouldn't: $ mv bar2 /tmp $ cat /tmp/bar2 foo $ mv bar1 /tmp $ cat /tmp/bar1 cat: /tmp/bar1: No such file or directorySo, considering that we must be able to make both relative and absolute symlinks, and even to create broken symlinks that will become un-broken if the target file is later created, the target argument has to be interpreted as freeform text, rather than the path to an already-existing file. If you want to create a file named deploy/resources.php that links to deploy/resources.build.php, you need to decide if you want to create an absolute symlink (which is resilient against the symlink being moved, but breaks if the target is moved), or a relative symlink (which will keep working as long as both the symlink and the target are moved together and maintain the same relative paths). To create an absolute symlink, you could do: $ ln -s $PWD/deploy/resources.build.php deploy/resources.phpTo create a relative one, you would first figure out the relative path from the source to the target. In this case, since the source and target are in the same directory relative to one another, you can just do: $ ln -s resources.build.php deploy/resources.phpIf they weren't in the same directory, you would need to instead do something like: $ ln -s ../foo/f bar/bIn that case, even though foo and bar are both in your current directory, you need to include a ../ into the ln target because it describes how to find f from the directory containing b. That's an extremely long explanation, but hopefully it helps you to understand the ln syntax a little better.
It seems like it should be simple to symlink one file to a new file in a subdirectory.... ....without moving subdirectories. But something about the syntax is perplexing and counter to what I would expect. Here's a test case: mkdir temp cd temp mkdir deploy echo "Contents of the build file!" > deploy/resources.build.php ln -s deploy/resources.build.php deploy/resources.php cat deploy/resources.php #bad symlinkThis just creates a broken symlink! I am running this in a build environment setup script, so I want to avoid changing the current working directory if at all possible. ln -s deploy/resources.build.php resources.php cat deploy/resources.phpAlso doesn't work because it creates the symlink in the temp directory instead of the deploy subdirectory. cd deploy ln -s resources.build.php resources.php cd ..This works, but I'd prefer to know how to do it without changing directories. Using a full path like: /home/whatever/src/project/temp/stuff/temp/deploy/resources.build.phpWorks, but is unweildy and somewhat impractical, especially in a build environment where all the project stuff might be different between builds, and the like. How can I create a symlink between two files in a subdirectory, without moving into that subdirectory and out of it, and while giving the new file "alias" a new name?
Symlink aliasing files in subdirectories without changing current directory
You can't without writing a bit of code. Those symlink shortcuts work because vim is written that way. It looks at how (with what name) it was started and acts as if it had been called with the appropriate command line options. This behavior is hardcoded in the executable, it is not a trick done by the symbolic link. So if you want to do that yourself, the easiest is to write a small wrapper script that execs vim with the options you want: #!/bin/sh exec vim <options you want> "$@"The "$@" at the end simply passes any command line options given to the script along to vim.
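For instance, to recreate rvim (which the man page defines as vim -Z) as a wrapper in a directory on your PATH (~/bin is an assumption):

$ cat > ~/bin/rvim <<'EOF'
#!/bin/sh
exec vim -Z "$@"
EOF
$ chmod +x ~/bin/rvim

exec replaces the wrapper shell with vim itself, so no extra process is left hanging around.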
After I make&make install vim from source, I found many symbolic links of vim in /usr/local/bin, such as evim, rvim, view... The vim(1) man page said that "rvim" is equivalent to "vim -Z" and so on. Now I wonder: can I make such a symbolic link with ln(1) myself, and if so, how?
How to make a symbolic link to /usr/bin/vim but with start-up parameters?
But I could only create a hard link in the /dev directory and it was not possible in other directories.As shown by the error message, it is not possible to create a hard link across different filesystems; you can create only soft (symbolic) links. For instance, if your /home is in a different partition than your root partition, you won't be able to hard link /tmp/foo to /home/user/. Now, as @RichardNeumann pointed out, /dev is usually mounted as a devtmpfs filesystem. See this example: [dr01@centos7 ~]$ df Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/centos_centos7-root 46110724 3792836 42317888 9% / devtmpfs 4063180 0 4063180 0% /dev tmpfs 4078924 0 4078924 0% /dev/shm tmpfs 4078924 9148 4069776 1% /run tmpfs 4078924 0 4078924 0% /sys/fs/cgroup /dev/sda1 1038336 202684 835652 20% /boot tmpfs 815788 28 815760 1% /run/user/1000Therefore you can only create hard links to files in /dev within /dev.
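An easy way to check in advance whether a hard link can work is to compare device numbers (GNU stat shown):

$ stat -c '%d %n' /dev/sda1 /home/user

If the two numbers differ, the paths live on different filesystems, and link() is guaranteed to fail with EXDEV (the "Invalid cross-device link" error).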
When I wanted to create a hard link in my /home directory in root mode, Linux showed the following error message: ln: failed to create hard link ‘my_sdb’ => ‘/dev/sda1’: Invalid cross-device linkThe above error message is shown below: # cd /home/user/ # ln /dev/sda1 my_sdbBut I could only create a hard link in the /dev directory, and it was not possible in other directories. Now, I want to know how to create a hard link from an existing device file (like sdb1) in /home directory (or other directories) ?
Why I can't create a hard link from device file in other than /dev directory?
You won't need a convoluted bash script, but a simple one-liner. mkdir --parents will take care of everything, nicely not even printing an error if the directory structure already exists. Just be careful with how you treat these directories on removal, so you don't break other packages. Also, since you're writing it in bash, you can take a look at sorcery (shameless plug). Maybe it would be simpler to just modify that, as it is mature and flexible.
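A minimal sketch of the idea (the paths are hypothetical stand-ins for whatever your package file specifies):

target=/usr/pkg/name-version/include/linux/foo.h
linkpath=/usr/include/linux/foo.h
mkdir -p -- "$(dirname -- "$linkpath")"
ln -s -- "$target" "$linkpath"

mkdir -p creates any missing parents and stays quiet when they already exist, so ln never sees an incomplete path.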
So I'm writing a small package manager, and a problem I've run into is making the symbolic links to files. It installs the package to /usr/pkg/name-version, and then reads a file to determine what symbolic links to make. I'm using ln to make the links, and I've run into a problem when trying to install the Linux API headers. I need to link the header files themselves, not the folders that contain them (so if 2 packages need to put files in the same subdirectory of include they can without screwing one package up). That problem I solved, but ln simply errors out if the path is incomplete, which is annoying because those directories shouldn't exist until the package is installed. Is there a flag for ln that will create any directories that are missing, or am I going to have to go with some convoluted bash script?
Use `ln` to create a missing directory
You can't: a symlink simply stores a path, and the kernel resolves that path within the local filesystem tree whenever the link is accessed. Your network location //10.0.1.103/sharedFolder is not a path the kernel can resolve: Linux has no notion of hostnames inside paths (the leading double slash is UNC syntax from the Windows world), so there is nothing such a link could ever point to. On the other hand, a mounted network share does appear in the local tree at its mount point, which is why you can create a symlink to a mounted location.
I am trying to create a symlink of a file on one linux workstation to another linux workstation, without having to 'mount' any network shares. Here's what I am trying to do, but can't get it to work. ln -s /link/to/local/file.mov //10.0.1.103/sharedFolder/symlinkFile.mov
Symlink from one workstation to another without mount
You might want to do something like this:
for dir in dir1 dir2
do
    [[ ! -d /somedir/$dir ]] && mkdir /somedir/$dir
    find /media/sd*/$dir -type f -exec bash -c \
      '[[ ! -f /somedir/'$dir'/$(basename $1) ]] && ln -s $1 /somedir/'$dir'/' foo {} \;
done
This creates symbolic links in /somedir/dir1/ (resp. dir2) pointing to all files present under /media/sd*/dir1 (resp. dir2). This script doesn't preserve any hierarchy that might be present under the source directories.
Edit: Should you want all the links to be placed in a single directory, here is a slightly modified version:
[[ ! -d /somedir/data ]] && mkdir /somedir/data
find /media/sd*/dir[12] -type f -exec bash -c \
  '[[ ! -f /somedir/data/$(basename $1) ]] && ln -s $1 /somedir/data/' foo {} \;
I have multiple hard drives with the same directory hierarchy, for example: /media/sda/dir1 /media/sda/dir2 ... /media/sdb/dir1 /media/sdb/dir2Two hard drives with similar names and similar directory names. I want to create separate symbolic links to dir1 and dir2 on every hard drive. The easiest way I have found is to use cp -sR: cp -sR /media/sd*/dir1 /somedir/dir1 cp -sR /media/sd*/dir2 /somedir/dir2However, this creates new directories in /somedir which has various side effects, for example, the directory timestamps are useless. How can I create symbolic links named dir1 and dir2 which link to /media/sd*/dir1 and /media/sd*/dir2? Files are regularly added to the hard drives so I would need to run these commands on a regular basis.
Create symbolic links with wildcards
There's a disappointing lack of comments in the code. It's as if no-one ever thought it useful, since the time bind mounts were implemented in v2.4. Surely all you'd need to do is substitute .mnt->mnt_sb where it says .mnt...Because it gives you a security boundary around a subtree.PS: that had been discussed quite a few times, but to avoid searches: consider e.g. mount --bind /tmp /tmp; now you've got a situation when users can't create links to elsewhere no root fs, even though they have /tmp writable to them. Similar technics works for other isolation needs - basically, you can confine rename/link to given subtree. IOW, it's a deliberate feature. Note that you can bind a bunch of trees into chroot and get predictable restrictions regardless of how the stuff might get rearranged a year later in the main tree, etc.-- Al Viro There's a concrete example further down the threadWhenever we get mount -r --bind working properly (which I use to place copies of necessary shared libraries inside chroot jails while allowing page cache sharing), this feature would break security. mkdir /usr/lib/libs.jail for i in $LIST_OF_LIBRARIES; do ln /usr/lib/$i /usr/lib/libs.jail/$i done mount -r /usr/lib/libs.jail /jail/lib chown prisoner /usr/log/jail mount /usr/log/jail /jail/usr/log chrootuid /jail prisoner /bin/untrusted &Although protections should be enough, but I'd rather avoid having the prisoner link /jail/lib/libfoo.so (write returns EROFS) to /jail/usr/log where it's potentially writeable.
Original Problem I have a file on one filesystem: /data/src/file and I want to hard link it to: /home/user/proj/src/file but /home is on one disk, and /data is on another so I get an error: $ cd /home/user/proj/src $ ln /data/src/file . ln: failed to create hard link './file' => '/data/src/file': Invalid cross-device linkOkay, so I learned I can't hard link across devices. Makes sense. Problem at hand So I thought I'd get fancy and bind mount a src folder that's on /data's file system: $ mkdir -p /data/other/src $ cd /home/user/proj $ sudo mount --bind /data/other/src src/ $ cd src $ # (now we're technically on `/data`'s file system, right?) $ ln /data/src/file . ln: failed to create hard link './file' => '/data/src/file': Invalid cross-device linkWhy does this still not work? Workaround I know I have this setup right because I can make the hard link as long as I'm in the "real" /data directory instead of the bound one. $ cd /data/other/src $ ln /data/src/file . $ # OK $ cd /home/user/proj/src $ ls -lh total 35M -rw------- 2 user user 35M Jul 17 22:22 file$Some System Info $ uname -a Linux <host> 4.10.0-24-generic #28-Ubuntu SMP Wed Jun 14 08:14:34 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux$ findmnt . . . ├─/home /dev/sdb8 ext4 rw,relatime,data=ordered │ └─/home/usr/proj/src /dev/sda2[/other/src] │ ext4 rw,relatime,data=ordered └─/data /dev/sda2 ext4 rw,relatime,data=ordered$ mountpoint -d /data 8:2$ mountpoint -d /home/usr/proj/src/ 8:2Note: I manually changed the file and directory names to make the situation more clear, so there may be a typo or two in the command readouts.
Why can't I create a `hardlink` to a file from a "mount --bind" directory on the same filesystem?
Because the second ln doesn't fail: it creates a symbolic link symlink_dir/dir_2 -> dir_2. Do a:
ls -l symlink_dir/dir_2
and you'll see a (probably broken) symlink there. That's how ln is meant to work if the destination is a directory (or a symlink to a directory). A third ln would fail, because there's already a dir_2 inside symlink_dir (aka dir_2).
When running (on Linux, various Ubuntu variants): >ln -s dir_1 symlink_dir >ln -s dir_2 symlink_dirit fails without telling you that it fails. But if you do the same thing on a file instead, or add v to the options, it does tell you that it fails: >ln -s file_1 symlinkg_file >ln -s file_2 symlinkg_fileor >ln -sv dir_1 symlink_dir >ln -sv dir_2 symlink_dirIt fails with the error msg: ln: failed to create symbolic linkTo me this seems to be very strange behaviour. Is there a reason for this?
Why doesn't ln -s tell that it fails when creating a symlink to an existing symlinked directory?
When you write ln -s VALUE link_nameit creates a symbolic link with value VALUE. This is what you got. If you want to create a relative link, it is best to cd to the directory where you want to put the link: $ cd ~/bin $ ln -s ../programming/tmux/tmux .Shell completion will help you.
I have an issue with ln -s on Ubuntu 14.04 in the following scenario: $ cd ~/programming/tmux/ $ ln -s tmux ~/bin/tmux $ ls -l ~/bin/tmux lrwxrwxrwx 1 USER USER 4 sie 31 11:02 /home/USER/bin/tmux -> tmux Why is that? When I create it giving the absolute path, everything works fine: $ ln -s ~/programming/tmux/tmux ~/bin/tmux $ ls -l ~/bin/tmux lrwxrwxrwx 1 USER USER 4 sie 31 11:02 /home/USER/bin/tmux -> ~/programming/tmux/tmux
Why ln -s creates relative broken links? [duplicate]
Without -n, both your ln commands would create links inside dir2: if LINK_NAME exists and is a directory or a symlink to a directory, the link is created inside the directory (if possible). That’s what -n changes here: ln won’t consider LINK_NAME as a directory (if it’s a symlink). Since LINK_NAME already exists, ln fails. Adding -f will cause the existing symlink to be replaced: ln -nsf dir3 dir2will replace dir2 instead of creating a link inside dir2.
I'm refreshing my understanding of a few basic GNU/Linux commands that I admit that I never really understood fully for the last 20+ years. $ man ln in part says: -n, --no-dereference treat LINK_NAME as a normal file if it is a symbolic link to a directoryTo try to understand this better, I broke this down as follows: $ mkdir dir1 $ ln -vs dir1 dir2 'dir2' -> 'dir1' $ mkdir dir3; touch dir3/xx$ tree -F . ├── dir1/ ├── dir2 -> dir1/ └── dir3/ └── xx # Now to test -n, first as a hard link $ ln -vn dir3/xx dir2 ln: failed to create hard link 'dir2': File exists# and second as a symbolic link $ ln -vsn dir3/xx dir2 ln: failed to create symbolic link 'dir2': File exists# ??? why do these both fail ???Only the first command form in the SYNOPSIS calls out a 'LINK_NAME', in this syntax: ln [OPTION]... [-T] TARGET LINK_NAMESo this says that the -n and --no-dereference options ONLY relate to the first command form for ln, (and not the other three command forms). In my example: The TARGET is dir3/xx, and the LINK_NAME is dir2 ('a symbolic link to a directory'). The manual says that if LINK_NAME (i.e. remember this is the name of the link we are supposedly creating) is 'a symbolic link to a directory'... ... then we are supposed to treat this symbolic link as a 'normal file'. What am I missing?
What is `ln --no-dereference` supposed to do?
Sort of, but note that the size of a file is not well-defined at that level of precision. A symbolic link involves four parts:The name of the link, which is stored in the directory where it is an entry. Other metadata that is present for every directory entry, to locate the rest of the metadata. This is typically the location of an inode. In addition, each directory entry costs a few more bytes, for example to pad file names and to maintain a data structure such as a balanced tree or hash. Other metadata of the symbolic link itself such as timestamps. This metadata is also present for other file types (e.g. an empty regular file). The target of the link.If the filesystem allows symbolic links to have multiple hard links, the first two parts are per directory entry, the last two parts are present only once per symlink. In ext2/ext3/ext4, the target of a symbolic link is stored in the inode if it's at most 60 bytes long. You can confirm that by asking du: it reports 0 for symlinks whose target is ≤60 bytes and one block for larger targets. Just like for a regular file, the figure reported by du excludes the storage for the directory entry and the inode. If you want to know exactly how much space the symlink takes, you have to count those as well. Most classical filesystems allocate inodes at filesystem creation time, so the cost is split: the size of the directory entry counts against the number of blocks of data, the inode counts against the inode pool size. For the size of the directory entry itself, the exact number of bytes taken up by an entry can depend on what other entries are present in the directory. However a directory usually takes up a whole number of blocks, so if you create more and more entries, the size of the directory remains the same, until the entries no longer fit in one block and a second block is allocated, and so on. To see exactly how the directory entries are stored in the block, you'd need a filesystem debugger and a passable understanding of the filesystem format, or a good to excellent understanding of the filesystem format as well as the knowledge of what other entries are present in the directory and possibly the order in which they were created and other entries were removed. In summary, the “few bytes for other metadata” are:The directory entry, of a variable size. Creating the symbolic link may either make no difference or add one block. The inode.And the target may occupy anywhere from 0 to one block in addition to that.
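To see the 60-byte threshold in action on ext4 (a sketch; the exact block size depends on the filesystem):

$ ln -s "$(printf '%060d' 0)" short60   # 60-byte target
$ ln -s "$(printf '%061d' 0)" long61    # 61-byte target
$ du short60 long61

du should report 0 for short60 (the target is stored inside the inode) and one block for long61.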
Symbolic links do take room, of course, but just the room it takes to store the name and target plus a few bytes for other metadataDo symbolic links actually make a difference in disk usage? My question is, can we determine how many bytes a symlink is taking up? $ touch alfa.txt $ ln -s alfa.txt bravo.txtBoth du and ls report 8, which is "alfa.txt": $ du -b bravo.txt 8 bravo.txt$ ls -l bravo.txt lrwxrwxrwx 1 Steven None 8 Mar 8 18:17 bravo.txt -> alfa.txtIs some command available that can print the true size of the symlink with the "few bytes for other metadata" included?
Size of symlink
ln -f "$(readlink <symlink>)" <symlink> Note that if the link's target is a relative path, this must be run from the directory containing the symlink, so that the stored target resolves correctly.
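With GNU readlink, a variant that works from anywhere by canonicalising the target first:

ln -f "$(readlink -f -- <symlink>)" <symlink>

(-f on readlink follows the whole chain and returns an absolute path, so the resulting hard link points at the right file regardless of your current directory.)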
Given a single symlink (no batch filesystem processing needed), what command line can I use to turn it into a hard link to the same file?
How to replace a symbolic link with an equivalent hard link?
When you run
ln -s nonexistenttarget link
ln doesn't check whether nonexistenttarget exists, it creates the link, unless link already exists. -f works around the last part by deleting link if necessary. The impact of a non-existent target is only felt when a program tries to dereference the link, e.g. by opening it: $ ls -l link lrwxrwxrwx 1 steve steve 17 May 22 08:44 link -> nonexistenttarget$ cat link cat: link: No such file or directory
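A quick demonstration that such a link becomes un-broken as soon as the target appears:

$ ln -s nosuchfile link
$ cat link
cat: link: No such file or directory
$ echo hello > nosuchfile
$ cat link
hello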
Is there a way to create a symlink whose target does not exist using shell scripts? From reading man 1 ln, I do not see an option to do so; and even -f checks if the target exists. Is there a way to achieve what I'm looking for?
Is there a way to create a symlink to a non-existent target?
Don't do this. If you want a backup system that uses hard links to save space, it is better to use rsync with --link-dest, which will hard link unchanged files appropriately, without causing the problems that this causes (hard-linking directories corrupts the filesystem: it produces wrong link counts, fails fsck, and generally has unknown semantics because the directory tree is no longer a DAG).
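A sketch of a Time-Machine-style invocation (the directory names are hypothetical):

rsync -a --delete --link-dest=/backups/2017-02-28 /source/ /backups/2017-03-01/

Files unchanged since the previous snapshot are hard linked into the new one instead of being copied, which is exactly what the manual directory hard links were meant to achieve, minus the filesystem corruption.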
I understand the reasoning why nearly every unix version doesn't allow hard-linking of directories (in fact HFS+ on OS X is the only one I know, but even that isn't made easy to do yourself). However, all file-systems in theory support hard-linked directories, as all directories contain at least one extra hard-link to itself, plus extra hard-links in sub-directories pointing back to their parent. Now, I realise that hard-linking can be dangerous if misused, as it can create cyclical structures that few programs will check for, and thus become stuck in an infinite loop. However, I was hoping to use hard-links to create a Time Machine style backup that can work for any unix. I don't believe that this kind of structure would be dangerous, as the links simply point to previous backups; there should be no risk of cyclical linking. In my case I'm currently just using rsync to create hard-links to existing files, but this slow and wasteful, particularly with very large backups and especially if I already know which directories are unchanged. With this in mind, is there any way to force the creation of directory hard-links on unix variants? ln is presumably no good as this is the place that many unix flavours put their restricts upon in order to prevent hard-linking directories, and ln versions that support hard-linked directories specifically state that the operation is likely to fail. But for someone who knows the risks, and knows that their use-case is safe, is there any way to actually create the link anyway? Ideally for a shell script, but if I need to compile a small program to do it then I suppose I could.
Forcibly create directory hard link(s)?
The link() system call on the NFS client should map directly to the NFS LINK operation, which the server should implement using its link() system call. So as long as link() is atomic on the server, it will also be atomic on the clients.
I have a cluster with a bunch of servers with a shared disk containing a GFS global file system that all nodes access simultaneously. Each node in the cluster run the same program (a shell script is the main core). The system processes files that appear in a couple of input directories, and it works like this:the program loops through the input directories. for each file found, check existence of a "lock file", if lock file exists skip to next file. if no lock file found, create lock file. If lockfile creation failed (race lost), skip to next file if "we" own the lock, process the file and move it out of the way when it is finished.This all works very well, but I wonder if there are cheaper (less complex) solutions that would also work. I'm thinking NFS or SMB perhaps. There are two reasons for my use of GFS:each file is stored in one place only (on redundant underlying hardware of course) file locking works reliablyI create the lockfile like this: date '+%s:'${unid} > ${currlock}.${unid} ln ${currlock}.${unid} ${currlock} lockrc=$? rm -f ${currlock}.${unid}where $unid is a unique session identifier and $currlock is /gfs/tmp/lock.${file_to_process} The beauty of ln is that it is atomic, so it fails for all but one that attempts the same thing at the same time. So, I guess what I'm asking is: will NFS fill my needs? Does ln work reliably in the same way on NFS as on GFS?
Is `ln` atomic and reliable on NFS? Could NFS replace GFS in this use case?
-L only works with hard links; as specified in POSIX:If the -s option is specified, the -L and -P options shall be silently ignored.If you have readlink you can use that: ln -s -- "$(readlink symlink1)" symlink4If your readlink supports the -f option, you can use that to fully canonicalise the target (i.e. resolve all symlinks in the target’s path, if the target symlink includes other symlinks).
Let's suppose I have one file and one directory:

$ ls -l
total 4
drwxrwxr-x. 2 user user 4096 Oct 8 09:53 dir
-rw-rw-r--. 1 user user 0 Oct 8 09:53 file

I created a symlink to file called symlink1, and a symlink to dir called dirslink1:

$ ls -l
drwxrwxr-x. 2 user user 4096 Oct 8 09:53 dir
lrwxrwxrwx. 1 user user 3 Oct 8 10:03 dirslink1 -> dir
-rw-rw-r--. 5 user user 0 Oct 8 09:53 file
lrwxrwxrwx. 1 user user 4 Oct 8 09:53 symlink1 -> file

Now I created symlinks to symlink1 using ln -s and ln -sL:

$ ln -s symlink1 symlink2
$ ln -sL symlink1 symlink3
$ ln -s dirslink1 dirslink2
$ ln -sL dirslink1 dirslink3

Now, as far as I understand, symlink3 should point to file and dirslink3 should point to dir. But when I check it, none of the symlink[23] and dirslink[23] points to the original file or dir:

$ ls -l
drwxrwxr-x. 2 user user 4096 Oct 8 09:53 dir
lrwxrwxrwx. 1 user user 3 Oct 8 10:03 dirslink1 -> dir
lrwxrwxrwx. 1 user user 9 Oct 8 10:03 dirslink2 -> dirslink1
lrwxrwxrwx. 1 user user 9 Oct 8 10:03 dirslink3 -> dirslink1
-rw-rw-r--. 5 user user 0 Oct 8 09:53 file
lrwxrwxrwx. 1 user user 4 Oct 8 09:53 symlink1 -> file
lrwxrwxrwx. 1 user user 8 Oct 8 09:54 symlink2 -> symlink1
lrwxrwxrwx. 1 user user 8 Oct 8 09:54 symlink3 -> symlink1

The question is: Is it possible/How do I create a symlink to the original file using another symlink?
ln: create symlink using another symlink
There is no "following the link" with hardlinks - creating a hardlinks simply gives several different names to the same file (at low level, files are actually integer numbers - "inodes", and they have names just for user convenience)- there is no "original" and "copy" - they are the same. So it is completly the same which of the hardlinks you open and write to, they are all the same. So cp by defaults opens one the files and writes to it, thus changing the file (and hence all the names it has). So yes, it is expected. Now, if you (instead of rewriting) first removed one of the names (thus reducing link count) and then recreated new file with the same name as you had, you would end up with two different files. That is what cp --remove-destination would do. 1 basics are documented at link(2) pointed to by ln(1) 2 yes it is normal behaviour and not a fluke. But see above remark about cp --remove-destination 3 no, not really. Hardlinks are simply several names for same file. What you seem to want are COW (copy-on-write) links, which only exist is special filesystems 4 yes, cp --remove-destination fileB fileA
Say I have the following setup:

$ cat fileA
textA
$ cat fileB
textB
$ ln fileA myLink
$ cat myLink # as expected
textA

I do not understand the following behaviour:

$ cp fileB fileA
$ cat myLink # expected ?
textB

I would have expected this outcome if I had written ln -s fileA myLink instead, but not here. I would have expected cp in overwriting mode to do the following:

Copy the content of fileB somewhere on the hard drive
Link fileA to that hard drive address

but instead, I infer it does the following:

Follow the link fileA
Copy the content of fileB at that address

The same does not seem to go for mv, with which it works as I expected. My questions:

1. Is this explained somewhere that I have missed in man cp, man mv or man ln?
2. Is this behaviour just a coincidence (say, if fileB is not much greater in size than fileA), or can it be reliably used as a feature?
3. Does this not defeat the idea of hard links?
4. Is there some way to modify the line cp fileB fileA so that the next cat myLink still shows textA?
cp overwriting without overwriting hardlinks to destination
I created a script that will do this. The script converts all hard links it finds in a source directory (first argument) that share an inode with files in the working directory (optional second argument) into symbolic links: https://gist.github.com/rubo77/7a9a83695a28412abbcd It has an option -n for a dry run, which doesn't change anything but shows what would be done. Main part:

WORKING_DIR=./
# relative source directory from the working directory:
SOURCE_DIR=../otherdir/with/hard-links/with-the-same-inodes

# find all files in WORKING_DIR
cd "$WORKING_DIR"
find "." -type f -links +1 -printf "%i %p\n" | \
while read working_inode working_on
do
  find "$SOURCE_DIR" -type f -links +1 -printf "%i %p\n" | sort -nk1 | \
  while read inode file
  do
    if [[ $inode == $working_inode ]]; then
      ln -vsf "$file" "$working_on"
    fi
  done
done

The -links +1 predicate finds all files that have MORE than one link; hard-linked files have a link count of at least two.
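As a side note, the nested find above rescans the source directory once per working file. A hedged alternative sketch that scans each tree once and matches inodes with join (assumes GNU find, plain lexical sort so that join's ordering requirement is met, and no whitespace in file names):

cd "$WORKING_DIR"
find . -type f -links +1 -printf "%i %p\n" | sort > /tmp/work.lst
find "$SOURCE_DIR" -type f -links +1 -printf "%i %p\n" | sort > /tmp/src.lst
# join on the inode column; output lines: inode working_name source_name
join /tmp/work.lst /tmp/src.lst |
while read -r inode working_on file
do
    ln -vsf "$file" "$working_on"
done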
It is easy to convert a symlink into a hardlink with ln -f (example). It would also be easy to convert a hardlink (filenames link and original) back to a symbolic link link -> original in the case where you know both files and decide yourself which one is the "original file". You could easily create a simple script convert-known-hardlink-to-symlink that would result in something like:

convert-known-hardlink-to-symlink link original
$ ls -li
3802465 lrwxrwxrwx 1 14 Dec 6 09:52 link -> original
3802269 -rw-rw-r-- 1 0 Dec 6 09:52 original

But it would be really useful to have a script where you could define a working directory (default ./) and a search directory (default /) in which to search for files with the same inode, and then convert all those hard links into symbolic links. The result would be that, in the defined working directory, all files that are hard links are replaced with symbolic links to the first found file with the same inode. A start would be:

find . -type f -links +1 -printf "%i: %p (%n)\n"
Convert a hardlink into a symbolic link
It appears as if the GNU ln implementation on Linux uses the stat() function to determine whether the target exists or not. This function is required to resolve symbolic links, so when the target of the pre-existing link is not accessible, the call fails with EACCES ("permission denied") and the utility fails. This has been verified with strace to be true on an Ubuntu Linux system. To make GNU ln use lstat() instead, which does not resolve symbolic links, you should call it with its (non-standard) -n option (GNU additionally accepts --no-dereference as an alias for -n).

ln -s -n -f ../../../raw_data/CHIP_TEST/BM50.2.fastq 50ATC_Rep2.fastq

Reading the POSIX specification for ln, I can't really make out whether GNU ln does this because of some undefined or unspecified behaviour in the specification, but it is possible that it relies on the fact that...

If the destination path exists and was created by a previous step, it is unspecified whether ln shall write a diagnostic message to standard error, do nothing more with the current source_file, and go on to any remaining source_files; or will continue processing the current source_file.

The "unspecified" bit here may give GNU ln the license to behave as it does, at least if we allow ourselves to interpret "a previous step" as "the destination path is a symbolic link". The GNU documentation for the -n option is mostly concerned with the case when the target is a symbolic link to a directory:

'-n'
'--no-dereference'
Do not treat the last operand specially when it is a symbolic link to a directory. Instead, treat it as if it were a normal file.
When the destination is an actual directory (not a symlink to one), there is no ambiguity. The link is created in that directory. But when the specified destination is a symlink to a directory, there are two ways to treat the user's request. 'ln' can treat the destination just as it would a normal directory and create the link in it. On the other hand, the destination can be viewed as a non-directory--as the symlink itself. In that case, 'ln' must delete or backup that symlink before creating the new link. The default is to treat a destination that is a symlink to a directory just like a directory.
This option is weaker than the '--no-target-directory' ('-T') option, so it has no effect if both options are given.

The default behaviour of GNU ln, when the target is a symbolic link to a directory, is to put the new symbolic link inside that directory (i.e., it dereferences the link to the directory). When the target of the pre-existing link is not accessible, it chooses to emit a diagnostic message and fail (allowed by the standard text). OpenBSD ln (and presumably ln on other BSD systems), on the other hand, will behave like GNU ln when the target is a symbolic link to an accessible directory, but will unlink and recreate the symbolic link as requested if the target of the pre-existing link is not accessible. I.e., it chooses to continue with the operation (also allowed by the standard text). Incidentally, GNU ln on OpenBSD behaves like OpenBSD's native ln. Removing the pre-existing symbolic link with rm is no issue whatsoever, as you have write and execute permissions on the directory it is located in.
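In other words, a minimal sketch of the two behaviours (paths illustrative):

ln -sf  new_target existing_link   # stat()s through existing_link; can fail with
                                   # "Permission denied" if the old target is
                                   # unreachable
ln -sfn new_target existing_link   # lstat()s existing_link itself and simply
                                   # replaces it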
On a Linux machine (a computing cluster, actually), I copied a folder from another user (who granted me permission to do so using the appropriate chmod). This folder contains symbolic links to files I cannot access. I want to update them so that they point to copies of the same files that I own. However, when I try to do so using ln -sf, I get Permission denied. Why is that so? That's the link:

$ ls -l 50ATC_Rep2.fastq
lrwxrwxrwx 1 bli cifs-BioIT 55 21 nov. 13:45 50ATC_Rep2.fastq -> /pasteur/homes/mmazzuol/Raw_data/CHIP_TEST/BM50.2.fastq

I don't have permission to access its target, but I have a copy of it. That's the new target I want:

$ ls -l ../../../raw_data/CHIP_TEST/BM50.2.fastq
-rwxr-xr-x 1 bli cifs-BioIT 4872660831 21 nov. 14:00 ../../../raw_data/CHIP_TEST/BM50.2.fastq

And that's what happens when I try ln -sf:

$ ln -sf ../../../raw_data/CHIP_TEST/BM50.2.fastq 50ATC_Rep2.fastq
ln: accessing `50ATC_Rep2.fastq': Permission denied

It seems that the permissions of the current target are what counts, not the permissions on the link itself. I can circumvent the problem by first deleting the link, then re-creating it:

$ rm 50ATC_Rep2.fastq
rm: remove symbolic link `50ATC_Rep2.fastq'? y
$ ln -s ../../../raw_data/CHIP_TEST/BM50.2.fastq 50ATC_Rep2.fastq
$ ls -l 50ATC_Rep2.fastq
lrwxrwxrwx 1 bli cifs-BioIT 40 21 nov. 18:57 50ATC_Rep2.fastq -> ../../../raw_data/CHIP_TEST/BM50.2.fastq

Why can I delete the link, but not update it?
Why permission denied upon symbolic link update to new target with permissions OK?
First take a backup of your data in case something goes wrong. What I understand is that you want to delete all the contents of the /all/links/ directory together with the files the links inside it point to. There is no way to do this with a single Unix command as you would prefer, but it works with a simple script:

#!/bin/bash

for i in /all/links/*
do
  rm "$(readlink -f -- "$i")"
  rm "$i"
done

rmdir /all/links

As you said that all files inside /all/links/ are symbolic links, there is no need to check each value of i to see whether it is a link or not.
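If you prefer a single command line over a script, a hedged equivalent using find (assuming GNU readlink; -e makes it skip links whose target cannot be resolved):

find /all/links -maxdepth 1 -type l -exec sh -c '
  for l do
    t=$(readlink -e -- "$l") && rm -f -- "$t"   # remove the target if resolvable
    rm -- "$l"                                  # remove the link itself
  done' sh {} +
rmdir /all/links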
I have two directories: /all/origins/ and /all/links/. Everything in links/ points individually to something in origins/, but not everything in origins/ is linked in links/. (Squares, rectangles & categorical hierarchy, get it?) These are symbolic links (ln -s), not hard links. Three things need to be deleted: (1) everything in links/, (2) the symlink destinations in origins/ (without knowing in advance whether they are in origins/), and (3) the links/ directory itself. What rm command should I type to remove both the entire links/ directory AND any destinations symlinked to in origins/? I'm looking for something maybe like rm -R /all/links or rm -r --follow-all-ln /all/links. If an rm command with parameters will not do the job, please say so explicitly, as that is my preference and implicit in the title (can Unix & Linux do this via rm without a do loop? Only if that won't work, please state so and explain what do loop).
rm to remove a dir, any symlinks, AND symlink destinations? [duplicate]
No, a symbolic link is a type of file that references the path of another file. Now, if you do:

ln -s /bin/cat foo

and invoke foo as:

$ ./foo -A /proc/self/cmdline
./foo^@-A^@/proc/self/cmdline^@

you'll notice that the first argument that cat/foo received (argv[0]) was ./foo and not cat. So, in a way, through that symlink, we've had cat receive a different first argument. That's probably not what you had in mind for your first argument, though. Using a shell script wrapper is the typical way to address this. You don't need to use bash for that; your system's standard sh will be more than enough:

#! /bin/sh -
exec /path/to/my/executable --extra-option "$@"

Other options include defining an alias or function for it in your ~/.bashrc/~/.zshrc...
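For completeness, the alias/function variants mentioned at the end would look like this in ~/.bashrc (names illustrative):

alias myexe='/path/to/my/executable --extra-option'
# or, as a function, which also forwards any further arguments cleanly:
myexe() { /path/to/my/executable --extra-option "$@"; }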
Is it possible to create a symbolic link to an executable that executes it with a certain option/argument? I know a workaround would be to create a bash script in one of the PATH directories, but can I achieve it somehow with a symbolic link? EDIT: Thanks for the answers. In my case an alias wouldn't do the job, because I'm looking for a way to start matlab from dmenu, and at least on Arch matlab is at first only invokable from a terminal. Since dmenu does not consider aliases, it wouldn't work. I should have made my problem more clear.
Symbolic link with option
short: no, you cannot do it this way
long: a desktop launcher may work for you.
Unix-style symbolic links store only a target path; there is no separate property for a source (working) directory. You can read about symbolic links in:

What is a symbolic link made from?
Understanding the structure of symlinks
Advantage of symlinks over Windows style shortcuts

As the question points out, the source directory is the desktop directory. A comment mentions Create a symbolic link relative to the current directory, but that is not relevant to the question. The question refers to the behavior of shortcuts in Microsoft Windows. With a desktop launcher, you can imitate this behavior (referring to the Desktop Entry Specification, in the section Recognized desktop entry keys):

Exec: Program to execute, possibly with arguments. See the Exec key for details on how this key works. The Exec key is required if DBusActivatable is not set to true. Even if DBusActivatable is true, Exec should be specified for compatibility with implementations that do not understand DBusActivatable.
Path: If entry is of type Application, the working directory to run the program in.

Unix symbolic links are constants, while Windows shortcuts can have (like Apollo Domain during the 1980s) embedded variables. While the desktop specification goes into some detail regarding what is legal in Exec (special variables), it lacks detail on where your environment variables might be used, so implementations will differ. Fortunately, the question as posed only requires constants, and launchers are the place to look for solutions.
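A minimal launcher sketch for this case, using the Exec and Path keys quoted above (the file name and the choice of xdg-open are assumptions, not part of the specification); save it as, say, ~/Desktop/IELTS.desktop:

[Desktop Entry]
Type=Application
Name=IELTS
Comment=Open the IELTS folder at its original path
Exec=xdg-open "/media/Eric/node/language/1-1.English/space@English/7-2 IELTS"
Path=/media/Eric/node/language/1-1.English/space@English/7-2 IELTS

The file manager started by xdg-open should then show the original /media/... path rather than a path under the desktop directory.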
I created a soft link with the following command:

ln -s "/media/Eric/node/language/1-1.English/space@English/7-2 IELTS" ~/desktop/IELTS

On my desktop, when I click to open the IELTS dir, the path is /home/eric/Desktop/IELTS. I want the path to be the original path /media/Eric/node/language/1-1.English/space@English/7-2 IELTS. Is that possible?
Is it possible to create a soft link on my desktop which opens with the target path instead of the link's path?
Given an absolute path to the source directory:

cp -rs "$PWD/sourcedir/" targetdir/

The symbolic links in targetdir will then contain absolute paths to sourcedir. Otherwise, if cp just made a symbolic link, it would create something like:

targetdir/filename -> sourcedir/filename

But that isn't the correct relative path to find the original file; it should be:

targetdir/filename -> ../sourcedir/filename

cp doesn't try to figure out how the source and target directories relate to each other, which it would have to do to add the appropriate number of ../ prefixes.
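A quick way to see both behaviours (directory names illustrative):

$ mkdir -p sourcedir targetdir && touch sourcedir/source_file
$ cp -rs sourcedir targetdir/            # relative source: fails with the error above
$ cp -rs "$PWD/sourcedir" targetdir/     # absolute source: succeeds

With the absolute form, targetdir/sourcedir/source_file ends up as a symlink to $PWD/sourcedir/source_file.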
As the answer to this question shows (Is there a difference between hardlinking with cp -l or ln?), the purpose of the -l option for the cp command is to recursively hard-link (the contents of) directories. The -s option is the counterpart, creating soft links instead of hard links, but it appears that it can't be used recursively. Any attempt to do so results in the error message:

cp: `source_dir/source_file': can make relative symbolic links only in current directory

Perhaps this is distro dependent; in Ubuntu 12.04, this is the result. It only works if the original file and the link are in the same directory. Perhaps the syntax is incorrect? cp -rs target_directory destination_directory is what I used. Example:

$ ls sourcedir/
-rw-rw---- 1 user group 1123 Jan 8 23:10 source_file
$ cp -rs sourcedir/ targetdir/
cp: `targetdir/sourcedir/source_file': can make relative symbolic links only in current directory
what is the purpose of cp -s?