I just learned about the Meltdown and Spectre bugs. I read that:

"There are patches against Meltdown for Linux (KPTI (formerly KAISER)), Windows, and OS X."

Following the link in the quote I get to an article which is too obscure for me to understand. Still, it says:

"The resulting patch set (still called 'KAISER') is in its third revision and seems likely to find its way upstream in a relatively short period of time."

Following again the link in the above quote, I get to a page, updated on the 10th of November 2017, where I read the following:

"KAISER makes it harder to defeat KASLR, but makes syscalls and interrupts slower. These patches are based on work from a team at Graz University of Technology posted here[1]. The major addition is support for Intel PCIDs which builds on top of Andy Lutomorski's PCID work merged for 4.14. PCIDs make KAISER's overhead very reasonable for a wide variety of use cases."

The above page also links to the code of the fix (?), here, where I can also see kernel 4.14. From this I conclude that the fix is available only for kernel 4.14 (and above?). However, all currently supported versions of Ubuntu use a lower kernel: the latest Ubuntu (17.10) uses kernel 4.13, and the latest LTS Ubuntu (16.04) uses 4.4.

Does this mean that the fix for this bug is not available for Ubuntu? It seems that Ubuntu 18.04 will be based on kernel 4.15, but that is not released yet. Notice also that the fix seems to cover only Meltdown and not Spectre, which would mean that there is currently no fix for the latter anywhere.
Updates are available now!

- 2017 Nov 09: the Ubuntu Security team is notified by Intel under NDA
- 2018 Jan 03: issue becomes public a few days before the CRD
- 2018 Jan 09: Ubuntu kernel updates available (for patching Meltdown) for Ubuntu 16.04 LTS, Ubuntu 17.10, Ubuntu 14.04 LTS (HWE) and Ubuntu 14.04 LTS
- 2018 Jan 10: cloud images are available (for patching Meltdown) from http://cloud-images.ubuntu.com
- <TBD>: Core image updates

Source: Ubuntu Wiki & blog post
How to fix vulnerabilities related to Spectre and Meltdown bugs in Ubuntu? [duplicate]
I was trying to make my keyboard more useful and used xmodmap to map mathematical and Greek symbols to the mod3 modifier level, where Caps Lock is mapped to mod3. As I understand it, the 8-entry lines in .Xmodmap work as follows:

[nothing], [shift], [mod3], [mod3+shift], [altgr], [altgr+shift], [altgr+mod3], [altgr+shift+mod3]

I used this and it mostly worked, except for some strange occurrences. For instance:

```
keycode 49 = backslash bar includes includedin infinity infinity EuroSign EuroSign
```

This produces the first four entries nicely, but then it cycles around (altgr+key = backslash, not infinity). However, if I do

```
keycode 10 = backslash bar includes includedin infinity infinity EuroSign EuroSign
```

it produces the first four entries and the infinities, but the euro sign doesn't show (I get includes and includedin instead). Notice that I just used a different key; everything else is the same! Even with only 6 entries (in case there was a parsing problem with 8), the infinities are not shown on that particular key.

Furthermore, I mapped Greek letters to the 3rd and 4th entries (mod3 and mod3+shift). The lowercase ones work fine, but the uppercase doesn't work for Shift+mod3+S, Shift+mod3+W and Shift+mod3+X. It's not a font problem; xev doesn't show any event when I use these combinations. The entry is

```
keycode 39 = s S Greek_sigma Greek_SIGMA integral integral downarrow downarrow
```

Additionally, the last two entries mostly don't work. Even worse, one line gets particularly confused. I entered

```
keycode 51 = zcaron Zcaron zcaron Zcaron dstroke Dstroke dstroke Dstroke
```

but when I do xmodmap -pke I get

```
keycode 51 = zcaron Zcaron zcaron Zcaron dstroke Dstroke dstroke Dstroke zcaron Zcaron dstroke Dstroke zcaron Zcaron dstroke Dstroke
```

Notice it duplicated all the entries. A couple of keys do that, but not all of them!

So, my questions are:

- Am I using the 8-entry lines correctly?
- Why do I get a different number of working keysyms on different keycodes?
- Is it possible that something in the window manager maps these combinations and swallows them?
- Why are some lines getting duplicated when they are applied? Is this an obscure xlib bug? Is the modifier functionality limited?

Note that I'm in a unique situation where there simply can't be any unknown shortcuts that I don't know about, because my window manager is of my own making and I control all of them.

More details: the mapping of Caps Lock is done like this:

```
keycode 66 = Mode_switch
clear lock
clear control
clear mod1
clear mod2
clear mod3
clear mod4
clear mod5
add control = Control_L Control_R
add mod1 = Alt_L Meta_L
add mod2 = Num_Lock
add mod3 = Mode_switch
add mod4 = Super_L Super_R Hyper_L
add mod5 = ISO_Level3_Shift
```

The system is a freshly updated Arch Linux.

Edit: I found out that a small part of the problem (the S, W and X that didn't work at all) was due to the adjacency of the keys on the keyboard (key jamming: it couldn't handle 3 concurrently pressed keys in this combination). Right Shift removed the problem for W and X but not for S (on this particular keyboard). The rest is still a mystery, particularly the broken 5th and 6th entries for keycode 49 (the totally useless thingy above the Tab).
I'd better answer my own question for future reference. After a bit of in-depth research, I found out that xmodmap is actually deprecated and is roughly patched over the xkb keyboard model.

The xkb model doesn't use a linear array of alternatives, but splits layouts into groups, with each group having a couple of characters in different shift levels. The xmodmap definitions fill the entries in a very funny order: group 1 levels 1-2, group 2 levels 1-2, group 1 levels 3... The groups are meant to be like "layouts" and aren't usually accessed with modifiers but by toggling. The exception is the Mode_switch character that I used, but it only accesses group 2.

That would all be fine, except keys have types. Every key is defined by the layout to be TWO_LEVEL, FOUR_LEVEL, FOUR_LEVEL_ALPHANUMERIC and so on, and each type can have a different notion of which modifiers map to which levels. The behaviour I assumed (8 levels, all combinations) was actually LOCAL_EIGHT_LEVEL, which wasn't used at all by the layout. So in the case of keycode 51, the default was actually TWO_LEVEL, and xmodmap filled 3 groups with the 6 keysyms instead of adding 6 levels to the 1st group. The 3rd group wasn't reachable with the Mode_switch modifier. Using another key resulted in different behaviour because the pre-defined type was different.

As for the repetitions in the printout by xmodmap, I'm not sure exactly what happens (I printed the xkb definitions and all was fine), but I'm not surprised that errors occur when you map from a variable-length multidimensional array to a single list of keysyms. The output doesn't reflect the actual state anyway.

In conclusion: xmodmap is evil. Don't ever use it. Its behaviour is erratic and ill-defined at best; it doesn't do what it says it does. Make your own xkb maps. Reuse most of the layout by include-ing it and add the modifications that you need. In my case, the solution is to derive the second group from the Greek layout and substitute math symbols in strategic places, plus some modifications in the first group.

Most cheap keyboards are very limited when it comes to pressing 3 keys at a time. That resulted in erratic and hardware-dependent failures for some keys. I'll experiment with different modifier keys (the most useless key in the world - the Menu key, or the similarly useless right Win key), and possibly buy a better keyboard.

The combination of both problems (broken-by-design hardware + evil deceptive software) created a confusing, random-looking situation that at first prevented me from seeing them as separate problems.

Reading material:
http://tronche.com/gui/x/xlib/input/keyboard-encoding.html
http://en.wikipedia.org/wiki/ISO/IEC_9995
http://madduck.net/docs/extending-xkb/ [Wayback]
https://www.charvolant.org/doug/xkb/html/node5.html
https://wiki.archlinux.org/index.php/X_KeyBoard_extension
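To make the "make your own xkb maps" recommendation concrete, here is a minimal sketch of a derived symbols file. This is an illustration, not the author's actual map: the file name, the key choice and the use of the FOUR_LEVEL type are assumptions, and the include-with-group syntax should be checked against your xkb version.

```
// Hypothetical /usr/share/X11/xkb/symbols/mymath -- a sketch only.
// Reuses the us layout as group 1, pulls in Greek as group 2, and pins
// an explicit key type so the shift levels behave predictably.
partial alphanumeric_keys
xkb_symbols "math" {
    include "us(basic)"        // normal layout stays group 1
    include "gr(basic):2"      // Greek becomes group 2 (assumed syntax)
    key <AC02> {               // the physical S key
        type[Group1] = "FOUR_LEVEL",
        symbols[Group1] = [ s, S, Greek_sigma, Greek_SIGMA ]
    };
};
```

If the file is installed where setxkbmap can find it, something like `setxkbmap mymath` should load it.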
xmodmap problems and inconsistencies with more than 4 alternative symbols per key
What I'm trying to do: I'm trying to scan my file server for malware, using clamav/clamscan, whose man page says it can scan files up to 4 GB. The man page states:

```
--max-filesize=#n
    Extract and scan at most #n kilobytes from each archive. You may pass the value in megabytes in format xM or xm, where x is a number. This option protects your system against DoS attacks (default: 25 MB, max: <4 GB)
--max-scansize=#n
    Extract and scan at most #n kilobytes from each scanned file. You may pass the value in megabytes in format xM or xm, where x is a number. This option protects your system against DoS attacks (default: 100 MB, max: <4 GB)
```

My system: newish hardware, ASRock motherboard. CPU: AMD Athlon(tm) II X2 270 (3400 MHz). Memory: 4 GB. OS: Debian Wheezy, all updates.

Questions:

- What am I doing wrong here?
- What do those errors and warnings below mean?
- Is there a fix for this behavior?

My case: I've been trying to scan two 3 TB hard drives with clamscan for over a week now, but it always gives the same errors (except the bytecode number varies):

```
LibClamAV Warning: [Bytecode JIT]: recovered from error
LibClamAV Warning: [Bytecode JIT]: JITed code intercepted runtime error!
LibClamAV Warning: Bytcode 38 failed to run: Time limit reached
LibClamAV Warning: [Bytecode JIT]: Bytecode run timed out, timeout flag set
LibClamAV Warning: [Bytecode JIT]: recovered from error
LibClamAV Warning: [Bytecode JIT]: JITed code intercepted runtime error!
LibClamAV Warning: Bytcode 38 failed to run: Time limit reached
LibClamAV Warning: [Bytecode JIT]: recovered from error
LibClamAV Warning: [Bytecode JIT]: Bytecode run timed out, timeout flag set
LibClamAV Warning: [Bytecode JIT]: JITed code intercepted runtime error!
LibClamAV Warning: Bytcode 38 failed to run: Time limit reached
```

after approx. 40-50 hours of scanning. (The next snippet shows the actual clamscan command I'm trying to run.)

```
PID  USER PRI NI VIRT  RES  SHR  S CPU%  MEM% TIME+    Command
2012 root 20  0  1903M 246M 1244 R 101.  6.6  47h27:45 clamscan -r -i --remove --max-filesize=4000M --max-scansize=4000M /DATA1/
```

I've tried to delete the files suggested in one forum post, where corruption in some of those files (that is, bytecode.cvd, main.cvd and daily.cld) was suspected, and re-download them (with the update tool):

```
root ~ # ls -ahl /usr/local/share/clamav/
total 145M
drwxr-sr-x  2 clamav clamav 4.0K Mar 26 04:29 .
drwxrwsr-x 10 root   staff  4.0K Mar 20 01:59 ..
-rw-r--r--  1 clamav clamav  65K Mar 26 04:29 bytecode.cvd
-rw-r--r--  1 clamav clamav  83M Mar 26 04:29 daily.cld
-rw-r--r--  1 clamav clamav  62M Mar 18 01:17 main.cvd
-rw-------  1 clamav clamav  156 Mar 26 04:29 mirrors.dat
root ~ # rm -f /usr/local/share/clamav/bytecode.cvd /usr/local/share/clamav/daily.cld /usr/local/share/clamav/main.cvd
root ~ # freshclam
ClamAV update process started at Thu Mar 26 04:42:21 2015
Downloading main.cvd [100%]
main.cvd updated (version: 55, sigs: 2424225, f-level: 60, builder: neo)
Downloading daily.cvd [100%]
daily.cvd updated (version: 20242, sigs: 1358870, f-level: 63, builder: neo)
Downloading bytecode.cvd [100%]
bytecode.cvd updated (version: 247, sigs: 41, f-level: 63, builder: dgoddard)
Database updated (3783136 signatures) from db.UK.clamav.net (IP: 129.67.1.218)
```

I've also tried to set --max-filesize and --max-scansize lower, per the forum post I found here, which states that there is a limit on file/scan size at 2.17 GB:

```
clamscan -r -i --remove --max-filesize=2100M --max-scansize=2100M /DATA1/
```

but it gave the same errors.

The program is the latest from the official site, clamav-0.98.6, configured and compiled from source with these options:

```
./configure --enable-bzip2
```

I've tried to re-install the program; at first I also had more options set in the compilation (--enable-experimental, --with-dbdir=/usr/local/share/clamav). The last option I know of is to uninstall this version and try the packages from my distribution's repositories, but I would like to get this one working if at all possible.

UPDATE: I've also tried to install clamav from the repositories, but it gives the same problems/errors.

I've found this, but it's old and doesn't seem to identify the problem. And here, but still no definite answer or fix.

The drives I've been trying to scan are these:

```
# df -h
/dev/sdb1       2.7T  2.6T  115G  96% /DATA1
/dev/sdc1       2.7T  2.6T  165G  95% /DATA2
```

Here is fdisk:

```
# fdisk -l

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.
```

Possible cause: it could be something related to the memory/CPU that the system has, but I don't have that information. I found this, which states that clamscan loads the file to scan into memory, and if there isn't enough memory it will fail. This is likely what is happening, as I'm setting the scanner to scan files up to 4 GB and that's how much memory the system has. Excerpt:

"How big is that file? How much RAM (physical and swap separate, please) is installed on the scanning machine? Currently, ClamAV has a hard file limit of around 2.17GB. Because we're mapping the file into memory, if you don't have enough memory available to map the whole file, the memory mapping code (as currently implemented) will fail and the file won't be scanned. One of our long-term goals is to investigate being able to properly support large files."

Possible solution: if the above is the problem (not enough memory), I can simply extend the system's memory to 8 GB; but it's unlikely to be that simple, because I tried to run those scans on a system with 12 GB RAM.

EDIT #1: Here is a run on another system with Fedora 21 + 12 GB RAM:

```
clamscan -r -i --remove --max-filesize=1700M --max-scansize=1700M --exclude=/proc --exclude=/sys --exclude=/dev /
LibClamAV Warning: [Bytecode JIT]: recovered from error
LibClamAV Warning: [Bytecode JIT]: JITed code intercepted runtime error!
LibClamAV Warning: [Bytecode JIT]: Bytecode run timed out, timeout flag set
LibClamAV Warning: Bytcode 27 failed to run: Time limit reached
LibClamAV Error: cli_scanxz: premature end of compressed stream
LibClamAV Error: cli_scanxz: premature end of compressed stream

----------- SCAN SUMMARY -----------
Known viruses: 3779101
Engine version: 0.98.6
Scanned directories: 101382
Scanned files: 744103
Infected files: 0
Total errors: 18419
Data scanned: 285743.78 MB
Data read: 394739.73 MB (ratio 0.72:1)
Time: 32171.073 sec (536 m 11 s)
```

When I ran those same scans on it with sizes set to 2100M-4000M, it gave the same errors as mentioned in my original question.
I've found this (thanks to @FloHimself): "Brief Re-introduction to ClamAV Bytecode Signatures". It's a good overview/supplement of some of the usages of the program and some useful options. Excerpt:

"Bytecode signatures are a specialized type of ClamAV signature which is able to perform additional processing of the scanned file and allow for more robust detection. Unlike the standard ClamAV signature types, bytecode signatures have a number of unique distinctions which need to be respected for their effective usage.

Trust: Bytecode signatures, by default, are considered untrusted. In fact, only bytecode signatures published by Cisco, in the bytecode.cvd, are considered 'trusted'. This means that the ClamAV engine will, by default, never load, trigger or execute untrusted bytecodes. One can bypass this safety mechanism by specifying the bytecode unsigned option to the engine, but it should be noted that it is up to the user's discretion on using untrusted bytecode signatures. For clamscan, the command line option is --bytecode-unsigned. For clamd, one would need to specify BytecodeUnsigned yes in clamd.conf.

Timeout: Bytecode signatures are designed to only run for a limited amount of time, designated by an internal timeout value. If execution time exceeds the value, the bytecode signature's execution is terminated and the user is notified. The bytecode signature timeout value can be set by the user. For clamscan, the command line option is --bytecode-timeout=[time in ms]. For clamd, one would specify BytecodeTimeout [time in ms] in clamd.conf."

And this is useful:

"Issue Reporting: If anyone encounters an issue with bytecode signatures, whether within the clambc-compiler or within ClamAV, they can report it to https://bugzilla.clamav.net/. Be sure to include the bytecode signature, bytecode source (if possible), and any other pieces of useful information."

Answer: the key seems to be to set --bytecode-timeout= high enough that the scanner has time to scan the whole file. The default value is 60000 milliseconds (60 seconds), and I have set it to 190000, which works and doesn't give the timeout errors. This value could probably be set lower, but it works for me. Tested on two systems that had the errors before the setting.

UPDATE: Tested on three systems and many scans; the errors are gone with this setting for --bytecode-timeout. Here is the new command:

```
clamscan -r -i --remove --max-filesize=4000M --max-scansize=4000M --bytecode-timeout=190000 /DATA1
```

Note: I also upgraded the server's memory to 8 GB. I'm not sure whether clamscan loads the file into memory while it's being scanned, but one post said as much, and if so, that is another consideration.
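For completeness, the quoted documentation implies the same settings can be made persistent for the clamd daemon. A minimal sketch of the corresponding clamd.conf lines follows; the file path is an assumption and differs between packaged and source builds:

```
# Sketch of daemon-side equivalents of the clamscan flags above
# (location assumed, e.g. /etc/clamav/clamd.conf; adjust for your build):
MaxFileSize 4000M
MaxScanSize 4000M
BytecodeTimeout 190000
```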
Warnings/Errors when running clamav/clamscan, scanning 3TB hard-drive
I've used Linux Mint for a while now and I'm quite the fan. I'm not expert enough to go messing with the kernel or anything like that, but I've noticed small bugs in a couple of software packages that I feel I would be able to fix. However, I have no idea how to begin contributing to the project. Here's a simple example: the calculator app in the Ubuntu repositories does not require NumLock to be activated for key presses on the number pad to be interpreted as numbers (rather than the Home and End keys which use the same physical buttons). However, this is not the case for the Del key which also serves as the decimal point. For this, NumLock does need to be activated. I suspect that this is a bug, and I would like to fix it. It ought to be quite simple. More than simply submitting a bug report, how does one become involved in fixing an issue like this? Would I need to contact the upstream package maintainers directly through the GitHub page?
In increasing order of helpfulness:

1. If you identify a bug, report it with as much relevant information as possible (to make it easy for the maintainers to reproduce and then fix).
2. If you can read the source and identify where the bug occurs, include that information.
3. If you are able to provide a patch that fixes the bug, include that (or open a pull request if the project is hosted on GitHub).

In the case of either 1, 2 or 3: make sure that you subscribe to the bug on the tracker/pull request/mailing list etc., so you can respond to any requests from the developers/maintainers to clarify or test your assumptions, and report back with any additional information. Nothing is worse than a "drive-by" bug report with insufficient information: these just clutter bug trackers/mailing lists with noise that has either to be ignored or cleaned up, at the cost of energy that could be profitably directed elsewhere in the project.
How do I usefully report a bug [closed]
I installed mysql on a brand-new Fedora 16 server and it would not start. This is the line from the log file (^G and all):

```
^G/usr/libexec/mysqld: Can't create/write to file '/tmp/ibNPyIlu' (Errcode: 13)
```

I looked at /tmp/ and it has rather strange-looking permissions: drwxrwxrwt. Why the dot? chmod 1777 does not change anything. Is this responsible for the error? What's next?
This was a bug with mysqld starting under systemd, after Fedora's ServicesPrivateTmp change (systemd's PrivateTmp setting) was made for additional security. When you performed a yum update, the mysql package was updated to mysql-5.5.22-1.fc16 or greater, which corrected the issue.

- Bug 815812
- Bug 782513
- Description of Fedora's implementation of the 'PrivateTmp' feature
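As an illustrative aside (not from the original answer): the feature is just a per-service systemd setting, so you can check whether a service runs with a private /tmp. The unit name below is an assumption; the option and command are standard systemd ones on newer versions:

```
# What the setting looks like inside a service unit (illustrative):
#   [Service]
#   PrivateTmp=true
# Query it on a running system:
systemctl show mysqld.service -p PrivateTmp
```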
Fedora 16 strange /tmp permissions: mysqld will not start
I ran into the error

```
device-mapper: reload ioctl on osprober-linux-nvme0n1p7 failed: Device or resource busy
```

while compiling the kernel in Ubuntu Studio. I use ZFS for my main drive. Apparently, this is a bug: [zfs-root] "device-mapper: reload ioctl on osprober-linux-sdaX failed: Device or resource busy" against devices owned by ZFS. How can I work around it?
According to the launchpad thread you linked to, it is a cosmetic error caused by os-prober not properly ignoring ZFS-managed drives, and if you're not dual-booting you can safely make the message go away with apt purge os-prober. See also here.
device-mapper: reload ioctl on osprober-linux-nvme0n1p7 failed: Device or resource busy
I'd like to know how bug fixing exactly works in Linux distributions. I mean, after all, a distro is made of open-source software written by external developers and then packaged by the distro's maintainers. So why does every distro have its own bug tracker? Shouldn't these bugs be submitted to the original authors of the software?
(I'll refer to original authors and original software as upstream authors and upstream software, because that's what I'm used to calling them.)

From the end user's perspective, it's nice to have a single place to report bugs, rather than having to sign up for accounts in various upstream bug trackers for all the software they use.

From an upstream author's perspective, it's nice to be shielded from a distribution's users' bug reports, for a couple of reasons:

- the distribution's maintainers may introduce bugs themselves (or bugs may occur because of interactions between a distribution's packages), and it shouldn't be up to the upstream author to fix those;
- the distribution may have requirements that the upstream author doesn't care about or can't handle (e.g. various hardware architectures).

Note that this doesn't mean that bugs which are in the upstream software don't get forwarded: if a user files a bug in a distribution bug tracker, and the bug is upstream's responsibility, then the bug will be forwarded to the upstream bug tracker. But usually the distribution maintainer will take care of that. For complex bugs the user may well be instructed to follow up upstream though, to avoid a middleman. Distribution bug trackers support this quite well, and will update a bug's status automatically as it changes in the upstream bug tracker.

From a distribution maintainer's perspective, it's necessary to have some distribution-specific bug tracker to track work to be done in the distribution itself (library version changes, new toolchains, new architectures, new distribution tools...). In addition, in many cases distributions provide support for older versions of packages, where bugs may still exist even though they have already been fixed by the upstream author in newer versions of the software. In that situation, it's somewhat annoying for users to ask upstream authors to fix the bugs, since they're already fixed from upstream's perspective; if the bug is sufficiently annoying, it should be up to the distribution's maintainers to backport the fix. (This is debatable for security fixes in important packages; many upstream projects provide security fixes for older releases themselves.)

A further factor to take into account is that there may no longer be an upstream for some pieces of software which are still important; this was the case for a long time for cron, for example. If distributions didn't have their own bug trackers, there would be nowhere for users to report bugs in such software.

In most projects all this tends to happen quite naturally, in a friendly fashion: distribution maintainers help upstream fix bugs, and vice versa, and distribution maintainers share bug fixes with other distributions.
How does bug fixing work in a distro? Upstream vs downstream
Is there a way to predict when the next release will be out? I read somewhere that it has to do with the number of bugs remaining in the testing branch. Could someone please explain how this works, and on what variables the timing of the next release depends?
See Debian Release Management; for Debian 9, it stated:

"As always, Debian 9 'Stretch' will be released 'when it's ready'."

and that's the general rule for all releases. The planned release date for Debian 9, June 17 2017, was announced on May 26 of that year. The planned release date for Debian 10, July 6 2019, was announced on June 11 of that year. (Both releases happened on the planned date.) Debian 11 is currently frozen, and the release is planned for August 14 2021.

Generally speaking, you're right that "when it's ready" correlates, to a large extent, with the number of release-critical bugs in the testing distribution. The release team give regular updates on debian-devel-announce, which are linked from the release management page. These updates list the items which still need to be fixed (including bugs, but not only), and explain how you can help; that's mainly:

- test the current testing distribution;
- help triage bugs;
- help fix bugs.

The best way of knowing when a Debian release will happen is to help fix the issues preventing it; as the number of such issues goes down, the release date gets closer. You can track the release-critical bugs; those which matter for the next release are counted as the "number concerning the next release". Other important ingredients for a Debian release are its installer and its documentation.
How can we predict when the next Debian release will be out?
I've recently filed a bug with gnome-shell to the GNOME maintainers, on their website (upstream). However, I'm not sure whether I was maybe supposed to file it with the package maintainers of my distribution (Fedora) instead. In the future, which should I prefer for similar programs? Or should I file the bug both upstream and with my distribution maintainers (which honestly doesn't make a lot of sense)?
I would suggest filing the bug report with the distribution's bug tracking system, if you are using their build. They can then escalate the bug report to the upstream maintainer, should it turn out that it exists in a vanilla build as well.

The rationale behind this is simply that since many distributions apply patches of their own, unless you are certain that the bug exists in a vanilla build, the packager is likely in a better position to be able to test both possible configurations (vanilla and patched) than an upstream developer who might even be running their system on a completely different architecture that your distribution of choice doesn't even support. Depending on the complexity of the program and what kind of unexplainable behavior you are seeing, it might even make sense to file a bug against the distribution's bug tracker even if you are using a vanilla build of the program in question but patched versions of any dependencies.

You can certainly escalate the bug to the upstream maintainer if you get no response from the distribution's package maintainer for a reasonable amount of time. In that case, include a link to the original report as well, for context, and cross-reference it in the distribution's bug tracking system so that it is easy to go from one to the other.

Bottom line: don't bother the upstream maintainers unless it's a problem with their code or the distribution maintainer is completely unresponsive.
Should I file bugs upstream, to my distribution maintainers, or both?
After freshly installing the Debian buster OS and the package command-not-found, running a command gives:

```
$ curl
Could not find the database of available applications, run update-command-not-found as root to fix this
Sorry, command-not-found has crashed! Please file a bug report at:
http://www.debian.org/Bugs/Reporting
Please include the following information with the report:

command-not-found version: 0.3
Python version: 3.7.3 final 0
Distributor ID: Debian
Description:    Debian GNU/Linux 10 (buster)
Release:        10
Codename:       buster
Exception information:

local variable 'cnf' referenced before assignment
Traceback (most recent call last):
  File "/usr/share/command-not-found/CommandNotFound/util.py", line 23, in crash_guard
    callback()
  File "/usr/lib/command-not-found", line 93, in main
    if not cnf.advise(args[0], options.ignore_installed) and not options.no_failure_msg:
UnboundLocalError: local variable 'cnf' referenced before assignment
```

Issuing update-command-not-found as root does not fix the problem. There is a bug report, but there seems to be no fix yet.
Not intuitive, but the error goes away immediately after apt update:

```
# apt update
Hit:1 http://deb.debian.org/debian buster InRelease
Get:2 http://deb.debian.org/debian buster-updates InRelease [49.3 kB]
Hit:3 http://security.debian.org/debian-security buster/updates InRelease
Get:4 http://deb.debian.org/debian buster/main amd64 Contents (deb) [36.1 MB]
Get:5 http://deb.debian.org/debian buster-updates/main amd64 Contents (deb) [42.3 kB]
Fetched 36.2 MB in 7s (5,009 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
# curl
Command 'curl' not found, but can be installed with:
apt install curl
```

PS. For those curious, the reason is the missing database after a fresh install:

```
ls -l /var/lib/command-not-found
total 0
```

and after apt update we have:

```
ls -l /var/lib/command-not-found
total 2504
-rw-r--r-- 1 root root 2560000 Jul 29 12:34 commands.db
-rw-r--r-- 1 root root     983 Jul 29 12:34 commands.db.metadata
```
Debian command-not-found error - local variable 'cnf' referenced before assignment
I know that with RS= we can set the record separator to a null/empty string; however, GNU awk also allows defining RS as a regex, so I decided to use RS='|', and I was expecting gawk to understand this the same as RS= (meaning "empty-string (or |) empty-string"), but it treats it as a literal | character, while when I do RS='X|Y' it correctly recognizes that it's a regex (X or Y). Would someone please explain what's happening with RS='|', such that awk doesn't take it as empty-string? I also tried RS='(|)', but this does something completely different: it considers the whole input to be a single record.
By definition, RS='|' is a literal |. Any single-character RS is treated as literal for portability across all awks; otherwise you'd have a script with RS='|' behaving differently in gawk vs a POSIX awk. So a single-char RS is literal, while a multi-char string as an RS is a regexp if the awk version supports it, otherwise it's literally just the first char of the string (so RS='.' is always a literal . while RS='.x' is any char followed by x in some awks and a literal . in others).

By the way, in any other regexp context a single | is undefined behavior per POSIX, but many tools will treat it as a literal |, and the same goes for regexp repetition chars like * and ?.

As for RS='(|)': that means "null or null", which is the same as "null", which you could alternatively write as (). It seems like that'd match around all characters; I don't know why it doesn't. Different tools seem to recognize that regexp differently:

```
$ printf 'foo\n' | sed -E 's/()/x/g'
xfxoxox
$ printf 'foo\n' | grep -Eo '()'
$
$ printf 'foo\n' | awk '{gsub(/()/,"x")} 1'
xfxoxox
$ printf 'foo\n' | awk -v RS='()' -v ORS='x\n' '1'
foox
```

I contacted the GNU Awk developers (see https://lists.gnu.org/archive/html/bug-gawk/2021-01/msg00003.html) and 2 things came out of it:

1. You must not use a multi-char regexp that matches a null string as a record separator or as a field separator. If you do, it will be treated as if the RS or FS does not exist, and you will end up with a single record for the whole input (for RS) or a single field for the whole record (for FS). That will be explicitly stated in a future release of the gawk manual.
2. There is a bug in gawk 5.1.0 (maybe earlier too, I don't know) that causes the terminating character to be consumed when the above rule is ignored. A fix has now been written for that and will be in a future gawk version.
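A quick demonstration of the literal-vs-regexp distinction described above (a sketch; assumes gawk, where a multi-character RS is treated as a regexp):

```
# Multi-char RS is a regexp: X|Y splits 'aXbYc' into three records.
printf 'aXbYc' | gawk -v RS='X|Y' '{print NR, $0}'
# 1 a
# 2 b
# 3 c

# Single-char RS is always literal: '|' splits 'a|b' into two records.
printf 'a|b' | gawk -v RS='|' '{print NR, $0}'
# 1 a
# 2 b
```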
awk - empty Record Separator: "RS=" vs "RS='|'" vs "RS=(|)"
I was trying to move a set of 7 files to my computer, via mv g* dir. The command line moved 6 of them, and for the last file gave the following error:

```
mv: g.tex: Argument list too long
```

Since the other files, both those before and after it, were already moved, I tried mv g.tex dir. Same error. Moving other files works fine. (Note: g.tex is a file, not a directory.)

Update: Renaming the file via mv also works fine; moving it to another directory on the USB drive also works fine. However, even when I rename it, or move it to another directory on the USB drive, I still cannot move it to my computer. I tried to cat this file, to copy its contents to the desktop:

```
cat: g.tex: Argument list too long
```

What else might be causing this problem?

Update: after comparing the output of dtruss with that of a file which moved successfully, here are the lines of the log which differ:

```
read(0x3, "\0", 0x20000) = -1 Err#7
write_nocancel(0x2, "mv: \0", 0x4) = 4 0
getrlimit(0x1008, 0x7FFF5A00BC78, 0x4) = 0 0
write_nocancel(0x2, "g.tex\0", 0x5) = 5 0
write_nocancel(0x2, ": \0", 0x2) = 2 0
write_nocancel(0x2, "Argument list too long\n\0", 0x17) = 23 0
unlink("/Users/username/Desktop/Tex/g.tex\0", 0x7FFF5A00B8A0, 0x17) = 0 0
close(0x3) = 0 0
```

From the list of Unix error codes for read:

```
#define E2BIG 7 /* Argument list too long */
```

On a successful move, it displays instead:

```
read(0x3, "Beginning of file contents...", 0x20000) = 0 0
fstat64_extended(0x3, 0x7FF1F5C02568, 0x7FF1F5C02660) = 0 0
fstat64(0x4, 0x7FFF5A653EF0, 0x7FF1F5C02660) = 0 0
fchmod(0x4, 0x180, 0x7FF1F5C02660) = 0 0
__mac_syscall(0x7FFF8E670D02, 0x52, 0x7FFF5A653E70) = -1 Err#93
flistxattr(0x4, 0x0, 0x0) = 0 0
flistxattr(0x3, 0x0, 0x0) = 23 0
flistxattr(0x3, 0x7FF1F5C02490, 0x17) = 23 0
fgetxattr(0x3, 0x7FF1F5C02490, 0x0) = 11 0
fgetxattr(0x3, 0x7FF1F5C02490, 0x7FF1F6001000) = 11 0
fsetxattr(0x4, 0x7FF1F5C02490, 0x7FF1F6001000) = 0 0
fstat64_extended(0x4, 0x7FFF5A653628, 0x7FF1F5C02660) = 0 0
fchmod_extended(0x4, 0xFFFFFF9B, 0xFFFFFF9B) = 0 0
fchmod(0x4, 0x0, 0xFFFFFF9B) = 0 0
close(0x3) = 0 0
fchown(0x4, 0x6300000063, 0x63) = 0 0
fchmod(0x4, 0x81FF, 0x63) = 0 0
fchflags(0x4, 0x0, 0x63) = 0 0
utimes("/Users/aleksander/Desktop/Tex/new_filename\0", 0x7FFF5A654860, 0x63) = 0 0
```

Just in case this helps, here is the remainder of the lines, which match for a successful mv command and for the failed one, right before the differing text quoted above:

```
open("/dev/dtracehelper\0", 0x2, 0x7FFF53E619B0) = 3 0
ioctl(0x3, 0x80086804, 0x7FFF53E61938) = 0 0
close(0x3) = 0 0
thread_selfid(0x3, 0x80086804, 0x7FFF53E61938) = 167920154 0
bsdthread_register(0x7FFF8E8710F4, 0x7FFF8E8710E4, 0x2000) = 1073741919 0
ulock_wake(0x1, 0x7FFF53E6116C, 0x0) = -1 Err#2
issetugid(0x1, 0x7FFF53E6116C, 0x0) = 0 0
mprotect(0x10BDA5000, 0x88, 0x1) = 0 0
mprotect(0x10BDA7000, 0x1000, 0x0) = 0 0
mprotect(0x10BDBD000, 0x1000, 0x0) = 0 0
mprotect(0x10BDBE000, 0x1000, 0x0) = 0 0
mprotect(0x10BDD4000, 0x1000, 0x0) = 0 0
mprotect(0x10BDD5000, 0x1000, 0x1) = 0 0
mprotect(0x10BDA5000, 0x88, 0x3) = 0 0
mprotect(0x10BDA5000, 0x88, 0x1) = 0 0
getpid(0x10BDA5000, 0x88, 0x1) = 28838 0
stat64("/AppleInternal/XBS/.isChrooted\0", 0x7FFF53E61028, 0x1) = -1 Err#2
stat64("/AppleInternal\0", 0x7FFF53E610C0, 0x1) = -1 Err#2
csops(0x70A6, 0x7, 0x7FFF53E60B50) = 0 0
sysctl([CTL_KERN, 14, 1, 28838, 0, 0] (4), 0x7FFF53E60CA8, 0x7FFF53E60CA0, 0x0, 0x0) = 0 0
ulock_wake(0x1, 0x7FFF53E610D0, 0x0) = -1 Err#2
csops(0x70A6, 0x7, 0x7FFF53E60430) = 0 0
stat64("/Users/aleksander/Desktop/Tex\0", 0x7FFF53E62B88, 0x7FFF53E60430) = 0 0
lstat64("g.tex\0", 0x7FFF53E62AF8, 0x7FFF53E60430) = 0 0
lstat64("/Users/aleksander/Desktop/Tex\0", 0x7FFF53E62A68, 0x7FFF53E60430) = 0 0
stat64("g.tex\0", 0x7FFF53E62AF8, 0x7FFF53E60430) = 0 0
stat64("/Users/aleksander/Desktop/Tex/g.tex\0", 0x7FFF53E62A68, 0x7FFF53E60430) = -1 Err#2
access("/Users/aleksander/Desktop/Tex/g.tex\0", 0x0, 0x7FFF53E60430) = -1 Err#2
rename("g.tex\0", "/Users/aleksander/Desktop/Tex/g.tex\0") = -1 Err#18
stat64("/\0", 0x7FFF53E5FB60, 0x7FFF53E60430) = 0 0
open_nocancel(".\0", 0x0, 0x1) = 3 0
fstat64(0x3, 0x7FFF53E5F900, 0x1) = 0 0
fcntl_nocancel(0x3, 0x32, 0x7FFF53E61980) = 0 0
close_nocancel(0x3) = 0 0
stat64("/Volumes/NO NAME\0", 0x7FFF5A00A870, 0x7FFF5A00C980) = 0 0
stat64("/Volumes/NO NAME\0", 0x7FFF5A00AB60, 0x7FFF5A00C980) = 0 0
getattrlist("/Volumes/NO NAME/g.tex\0", 0x7FFF8E715B04, 0x7FFF5A00C470) = 0 0
statfs64(0x7FFF5A00C980, 0x7FFF5A00CD88, 0x7FFF5A00C470) = 0 0
lstat64("g.tex\0", 0x7FFF5A00C8F0, 0x7FFF5A00C470) = 0 0
open("g.tex\0", 0x0, 0x0) = 3 0
open("/Users/aleksander/Desktop/Tex/g.tex\0", 0xE01, 0x0) = 4 0
fstatfs64(0x4, 0x7FFF5A00BFF8, 0x0) = 0 0
```

xattr -l g.tex doesn't give any output. ls -l g.tex yields:

```
-rwxrwxrwx 1 username staff 159939 Aug 15 11:54 g.tex
```

mount yields:

```
/dev/disk5s1 on /Volumes/NO NAME (msdos, local, nodev, nosuid, noowners)
```
E2BIG is not one of the errors that read(2) may return. It looks like a bug in the kernel. Pure speculation, but it could be down to some corruption on the file system and the macOS driver for the FAT filesystem returning that error upon encountering that corruption which eventually makes it through to the return of read. In any case, it looks like you've taken the investigation as far as it gets. Going further would require dissecting the file system and the kernel driver code. You could have a look at the kernel logs to see if there's more information there. You could try mounting the FS on a different OS. Or use the GNU mtools to access that FAT filesystem. You could also report the problem to Apple as at least a documentation issue (to include E2BIG as one of the possible error codes, and the conditions upon which it may be returned).
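A sketch of the GNU mtools route mentioned above, using the device name from the question's mount output (macOS-specific commands; mtools would need to be installed separately, and the exact device path is taken from the question):

```
# Unmount the volume first so nothing else touches the filesystem.
diskutil unmount "/Volumes/NO NAME"
# Copy the file straight off the raw FAT device with mtools,
# bypassing the kernel's msdos driver entirely.
sudo mcopy -i /dev/disk5s1 ::/g.tex ~/Desktop/g.tex
```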
mv `Argument list too long` for a single file
With reference to this Q&A on AU: why has the behavior of GNU grep with the -Pz options changed, so that it no longer supports the start-of-line ^ and end-of-line $ anchors? Is this a bug or correct behavior? Tested on Ubuntu 16.04 with kernel version 4.4.0-21-generic.

```
$ echo ^ | grep -Pz ^
grep: unescaped ^ or $ not supported with -Pz
```
This is the desired behavior of GNU grep version 2.24 (released on March 10 2016) and above, and it is the fix for a bug which was introduced in GNU grep 2.5. Looking into the source code:

```
if (*p == '$' || (*p == '^' && !after_unescaped_left_bracket))
  die (EXIT_TROUBLE, 0, _("unescaped ^ or $ not supported with -Pz"));
```

This change was made on Feb 21 2016; see this bug report for more details about it.

Though that is GNU grep's choice, it's a bug: GNU grep compiles the PCRE regex with PCRE_MULTILINE set, and it also reverted to calling pcre_exec on more than one record at a time, which is the source of the problem, as pointed out by Stéphane Chazelas.
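If you need anchor-like matching with -Pz anyway, one possible workaround (my illustration, not from the original answer) is to spell the boundaries out with other PCRE constructs, since, per the quoted source, grep only rejects the literal unescaped ^ and $ characters themselves:

```
# \A matches the start of the buffer; a lookbehind for \n matches the
# start of any later line. Likewise \z / a lookahead for \n at the end.
printf 'foo\nbar\n' | grep -Pzo '(?:\A|(?<=\n))bar(?=\n|\z)'
# bar
```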
grep doesn't support the start-of-line '^' and end-of-line '$' anchors when used with -Pz
I seem to have two Bash cursors. The initial position of what I deem the "extra" cursor seems to depend on the length of my prompt, with longer prompts causing it to start farther behind. When I type, the extra cursor falls behind by a pixel or two with each keystroke. This is weird and hard to describe, so I made a GIF: What's going on here? I'm using xterm 4.4.0 embedded in Theia IDE 1.3.0.
Turns out I'm experiencing this bug: https://github.com/eclipse-theia/theia/issues/8158 EDIT: Theia 1.4 was released sometime in the last 24 hours, and it resolves my issue.
I seem to have two Bash cursors. What's going on here?
When I was using Debian, there was a tool called reportbug. Ironically, the tool itself was probably the buggiest program I have ever used. I was never, ever actually able to make a successful bug report in the graphical mode. However, the tool seems to be a good idea. I don't have to go through some online identification process, or do any browser-business - I'd just fire up a program, fill out a few forms, and that was it. Is there something like that for the Fedora operating system? Or for Gnome itself? Because, sometimes, I encounter weird bugs and errors, and I think it would come in handy to have such a tool at my fingertips.
Command Line Tools

There is a Fedora-specific command line interface to Bugzilla provided by the python-bugzilla package. This may be the closest thing to Debian's reportbug. As an alternative, you can try the generic command line interface provided by the pybugz package; however, this is not a Fedora-specific tool. There is another Fedora-specific command line tool called perl-Fedora-Bugzilla, but this package has been orphaned and it may not be compatible with the latest version of Bugzilla that Fedora is using[1].

GUI Tool

Fedora uses the Automatic Bug Reporting Tool (abrt). In most installations it is enabled and running by default. It is supposed to automatically fire a notification when a program crashes and guide you through the appropriate steps to file a bug to bugzilla.redhat.com. You can launch it manually and review the previous crash reports by going to Activities and typing abrt. Unfortunately, this tool cannot be used to create whatever bug you want: the bug must be based on a crash that the tool noticed.

[1] See Bug 823417
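As a rough illustration of the python-bugzilla route (a sketch: command names and flags are from its generic CLI as I remember it, and all values are placeholders; check `bugzilla --help` before relying on them):

```
# Log in once, then file a report from the terminal.
bugzilla --bugzilla https://bugzilla.redhat.com login
bugzilla new --product Fedora --version 35 --component gnome-shell \
    --summary "Short description of the bug" \
    --comment "Steps to reproduce, expected and actual behaviour..."
```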
Debian's ReportBug equivalent on Fedora/Gnome?
I have a really strange audio problem on my Fedora distro (Fedora 35 Workstation Edition), installed on my laptop. Previously I used Windows 10 on this laptop and had no audio problem whatsoever. Then I installed Fedora, and after approximately one month I started having the following symptoms:

- Some days, not all days, just some days, the audio output simply stops working. Completely mute: no speakers, no headset, no nothing.
- The volume indicator on the top right side of the screen drops to zero on its own. I can bring the volume slider back up, but the audio still does not work, and when I connect a new audio device, such as a headset, the system volume drops back to zero; raising it manually again has no effect, still no sound.
- Sometimes restarting the machine fixes the problem, sometimes it does not, and it takes more than one restart.
- Sometimes it happens when I put a video on pause, sometimes it happens at startup. Some days the audio works like a charm.

What is going on? I cannot find anything online to help me fix this. I have tried killing and then starting pulseaudio back up. I have tried re-installing PulseAudio. I have tried everything, but nothing seems to work! Should I re-install Fedora from scratch? I would really like to avoid that. There has to be a way to find out what is going on and fix this!
I found the solution, thanks to this other answer! It turns out you just need to execute the following in a terminal:

```
sudo dnf swap --allowerasing pulseaudio pipewire-pulseaudio
```

Problem gone! It seems it was a problem with PipeWire; I have really no idea why a bug like this is present in Fedora 35. I have read online that PipeWire seems not ready to be implemented on a full scale on Fedora. Still, somebody should really do something about this; I spent weeks before finding this solution. I am also not sure why I encountered this problem while so many other users do not face any problem at all with their audio on Fedora. In any case, I hope this answer will be useful to others and spare them some of the hassle I had to put up with.
Inconsistent Fedora 35 audio problem
I've been using reportbug in novice mode on Debian 9 and needed to cancel the report because no editor was installed (on a Docker image). The last interaction was:

```
Submit this report on postgresql (e to edit) [y|n|a|c|E|i|l|m|p|q|d|t|s|?]? n
Saving a backup of the report at /tmp/reportbug-postgresql-backup-20180226-11446-cwjfs5eu
Bug report written as /tmp/reportbug-postgresql-20180226-11446-mrfjtcvz
```

Now, I don't seem to find a way to open the draft again based on the output of reportbug --help (draftpath seems to be for storage of new drafts only):

```
Usage: reportbug [options]

Options:
  --version  show program's version number and exit
  -h, --help  show this help message and exit
  -c, --no-config-files  do not include conffiles in report
  -C CLASS, --class=CLASS  specify report class for GNATS BTSes
  -d, --debug  send report only to yourself
  --test  operate in test mode (maintainer use only)
  -e EDITOR, --editor=EDITOR  specify an editor for your report
  -f SEARCHFOR, --filename=SEARCHFOR  report the bug against the package containing the specified file
  --from-buildd=BUILDD_FORMAT  parse information from buildd format: $source_$version
  --path  only search the path with -f
  -g, --gnupg, --gpg  sign report with GNU Privacy Guard (GnuPG/gpg)
  -G, --gnus  send the report using Gnus
  --pgp  sign report with Pretty Good Privacy (PGP)
  -K KEYID, --keyid=KEYID  key ID to use for PGP/GnuPG signatures
  -H HEADERS, --header=HEADERS  add a custom RFC2822 header to your report
  -P PSEUDOS, --pseudo-header=PSEUDOS  add a custom pseudo-header to your report
  --license  show copyright and license information
  -m, --maintonly  send the report to the maintainer only
  -M, --mutt  send the report using mutt
  --mirror=MIRRORS  add a BTS mirror
  -n, --mh, --nmh  send the report using mh/nmh
  -N, --bugnumber  specify a bug number to look for
  --mua=MUA  send the report using the specified mail user agent
  --mta=MTA  send the report using the specified mail transport agent
  --list-cc=LISTCC  send a copy to the specified address
  -p, --print  output the report to standard output only
  --report-quiet  file report without any mail to the maintainer or tracking lists
  -q, --quiet  reduce the verbosity of the output
  -s SUBJECT, --subject=SUBJECT  the subject for your report
  -x, --no-cc  do not send a copy of the report to yourself
  -z, --no-compress  do not strip blank lines and comments from config files
  -o OUTFILE, --output=OUTFILE  output the report to the specified file (both mail headers and body)
  -O, --offline  disable all external queries
  -i INCLUDE, --include=INCLUDE  include the specified file in the report
  -A ATTACHMENTS, --attach=ATTACHMENTS  attach the specified file to the report
  -b, --no-query-bts  do not query the BTS for reports
  --query-bts  query the BTS for reports
  -T TAGS, --tag=TAGS  add the specified tag to the report
  --http_proxy=HTTP_PROXY, --proxy=HTTP_PROXY  use this proxy for HTTP accesses
  --email=EMAIL  specify originating email address
  --realname=REALNAME  specify real name for your report
  --smtphost=SMTPHOST  specify SMTP server for mailing
  --tls  use TLS to talk to SMTP servers
  --source, --src  report the bug against the source package
  --smtpuser=SMTPUSER  username to use for SMTP
  --smtppasswd=SMTPPASSWD  password to use for SMTP
  --replyto=REPLYTO, --reply-to=REPLYTO  specify Reply-To address for your report
  --query-source  query on source packages, not binary packages
  --no-query-source  query on binary packages only
  --security-team  send the report only to the security team, if tag=security
  --no-security-team  do not send the report only to the security team, if tag=security
  --debconf  include debconf settings in your report
  --no-debconf  exclude debconf settings from your report
  -j JUSTIFICATION, --justification=JUSTIFICATION  include justification for the severity of your report
  -V PKGVERSION, --package-version=PKGVERSION  specify the version number for the package
  -u INTERFACE, --interface=INTERFACE, --ui=INTERFACE  choose which user interface to use
  -Q, --query-only  only query the BTS
  -t TYPE, --type=TYPE  choose the type of report to file
  -B BTS, --bts=BTS  choose BTS to file the report with
  -S SEVERITY, --severity=SEVERITY  identify the severity of the report
  --template  output a template report only
  --configure  reconfigure reportbug for this user
  --check-available  check for new releases on various sites
  --no-check-available  do not check for new releases
  --mode=MODE  choose the operating mode for reportbug
  -v, --verify  verify integrity of installed package using debsums
  --no-verify  do not verify package installation
  -k, --kudos  send appreciative email to the maintainer, rather than filing a bug report
  --body=BODY  specify the body for the report as a string
  --body-file=BODYFILE, --bodyfile=BODYFILE  use the specified file as the body of the report
  -I, --no-check-installed  don't check whether the package is installed
  --check-installed  check whether the specified package is installed when filing a report (default)
  --exit-prompt  prompt before exiting
  --paranoid  show contents of message before sending
  --no-paranoid  don't show contents of message before sending (default)
  --no-bug-script  don't execute the bug script (if present)
  --draftpath=DRAFTPATH  Save the draft in this directory
  --timeout=TIMEOUT  Specify the network timeout, in seconds [default: 60]
  --no-cc-menu  don't show additional CC menu
  --no-tags-menu  don't show tags menu
  --mbox-reader-cmd=MBOX_READER_CMD  Specify the program to open the reports mbox.
  --max-attachment-size=MAX_ATTACHMENT_SIZE  Specify the maximum size in byte for an attachment [default: 10485760].
  --latest-first  Order bugs to show the latest first
  --envelope-from=ENVELOPEFROM  Specify the Envelope From (Return-path) address used to send the bug report
```

Specifying the two files in /tmp as the filename fails due to

```
No packages match.
No package specified or we were unable to find it in the apt cache; stopping.
```

which might be wrong or right, depending on what this unexplained argument expects as input. I'm aware that it's far easier to create a new report; I'm asking this for reference.

I'm pretty sure I reported this once, but unfortunately was too honest about the integration test coverage and documentation review of reportbug (such problems simply shouldn't happen if you want to improve a FLOSS project), so the maintainer closed all of my otherwise constructive reports. I'm sure there's a lesson to be learned from this, but I'm still not certain which one...
Unfortunately there is no way to open a draft bug report in reportbug. This has been reported several times, and one of the bug reports gives a solution (assuming your system is configured in such a way that sendmail works): edit the draft in your favourite text editor, then send it using

```
sendmail -t < bugdraft
```

That's not much help on many systems nowadays... Some mail clients can import a message; that's another possible approach.
How to open a draft in reportbug after cancelling a report?
Red Hat Enterprise Linux 9 is no longer listed as a product in Bugzilla at https://bugzilla.redhat.com/enter_bug.cgi?classification=Red%20Hat. There are no onward links indicating that the issue tracker has moved, or where it has moved to, and Google comes up with nothing. Does anyone know where bugs are reported for Red Hat Enterprise Linux 9 as of September 2023?
RHEL moved from Bugzilla to Jira; see the RHEL project at issues.redhat.com. I didn't find an official announcement, only this thread on the Fedora devel mailing list announcing the move in 2022. The CentOS wiki now also points to Red Hat's Jira for reporting bugs against CentOS Stream 8 to 10.

Edit: An update about the migration was posted on the Fedora devel mailing list with more details about filing new bug reports for RHEL/CentOS and migrating existing reports to Jira:

"All new issues found or desired in RHEL (or CentOS Stream) need to be filed on issues.redhat.com. It's no longer possible to create new BZs for current RHEL (7 through 9) releases. Over the next few weeks, most RHEL BZs will be migrated to tickets in the RHEL project on issues.redhat.com. The BZs that are migrated will be closed with resolution MIGRATED and a pointer to the Jira issue included in the external links section of each respective BZ."
Where are RHEL9 bugs reported in 2023?
I'm trying to upgrade a Debian Bullseye system, but I get a worrying warning message:

```
% sudo aptitude upgrade
Resolving dependencies...
The following NEW packages will be installed:
  linux-headers-5.10.0-18-amd64{a} linux-headers-5.10.0-18-common{a}
  linux-image-5.10.0-18-amd64{a}
The following packages will be REMOVED:
  sse3-support{u}
The following packages will be upgraded:
  avahi-autoipd avahi-daemon base-files bind9-dnsutils bind9-host bind9-libs
  chromium chromium-common chromium-sandbox clamav clamav-base
  clamav-freshclam cri-tools curl dpkg dpkg-dev fig2dev firefox-esr
  firefox-esr-l10n-fr fonts-opensymbol gir1.2-ayatanaappindicator3-0.1
  gir1.2-gdkpixbuf-2.0 gir1.2-javascriptcoregtk-4.0 gir1.2-lokdocview-0.1
  gir1.2-webkit2-4.0 gping grub-common grub-pc grub-pc-bin grub2-common
  krb5-locales kubeadm kubectl kubelet kubernetes-cni libavahi-client3
  libavahi-common-data libavahi-common3 libavahi-compat-libdnssd1
  libavahi-core7 libavahi-glib1 libayatana-appindicator1
  libayatana-appindicator3-1 libc-bin libc-dev-bin libc-devtools libc-l10n
  libc6 libc6-dev libclamav9 libcurl3-gnutls libcurl4
  libdatetime-timezone-perl libdpkg-perl libexpat1 libexpat1-dev
  libgdk-pixbuf-2.0-0 libgdk-pixbuf-2.0-dev libgdk-pixbuf2.0-bin
  libgdk-pixbuf2.0-common libgssapi-krb5-2 libhttp-daemon-perl
  libhttp-parser2.9 libjavascriptcoregtk-4.0-18 libjs-bootstrap4 libjuh-java
  libjurt-java libk5crypto3 libkrb5-3 libkrb5support0 liblibreoffice-java
  liblibreofficekitgtk libnss-myhostname libnss-systemd libpam-systemd
  libpcre2-16-0 libpcre2-32-0 libpcre2-8-0 libpcre2-dev libpcre2-posix2
  libpoppler-cpp0v5 libpoppler-glib8 libpoppler-qt5-1 libpoppler102 libpq5
  libreoffice-base-core libreoffice-calc libreoffice-common libreoffice-core
  libreoffice-draw libreoffice-gnome libreoffice-gtk3 libreoffice-help-common
  libreoffice-help-en-us libreoffice-help-fr libreoffice-impress
  libreoffice-l10n-fr libreoffice-math libreoffice-style-colibre
  libreoffice-style-elementary libreoffice-writer libreofficekit-data
  libridl-java libsystemd0 libudev1 libuno-cppu3 libuno-cppuhelpergcc3-3
  libuno-purpenvhelpergcc3-3 libuno-sal3 libuno-salhelpergcc3-3
  libunoloader-java libwebkit2gtk-4.0-37 libxnvctrl0 libxslt1.1
  linux-compiler-gcc-10-x86 linux-headers-amd64 linux-image-amd64
  linux-kbuild-5.10 linux-libc-dev locales poppler-utils publicsuffix
  python3-uno systemd systemd-sysv systemd-timesyncd thunderbird tzdata udev
  uno-libs-private ure virtualbox virtualbox-dkms virtualbox-qt zlib1g
  zlib1g-dev
The following packages are RECOMMENDED but will NOT be installed:
  libnss-nis libnss-nisplus
136 packages upgraded, 3 newly installed, 1 to remove and 0 not upgraded.
Need to get 0 B/540 MB of archives. After unpacking 400 MB will be used.
Do you want to continue? [Y/n/?] y
Retrieving bug reports... Done
Parsing Found/Fixed information... Done
critical bugs of libc6 (2.31-13+deb11u3 -> 2.31-13+deb11u4) <Outstanding>
 b1 - #1019855 - Fwd: libc6: immediately crashes with SIGILL on 4th gen Intel Core CPUs (seems related to AVX2 instructions), bricking the whole system
grave bugs of grub-pc (2.04-20 -> 2.06-3~deb11u2) <Outstanding>
 b2 - #1019564 - (during upgrade) grub-install: warning: Attempting to install GRUB to a disk with multiple partition labels. This is not supported yet..
Summary:
 libc6(1 bug), grub-pc(1 bug)
Are you sure you want to install/upgrade the above packages? [Y/n/?/...]
```

So, what's the matter with libc6 and grub-pc? Is the problem as dangerous as apt tells me? What, concretely, can happen? Can I say "yes" anyway and keep going?
You can see the bug details in the Debian BTS: 1019564 and 1019855. The libc6 bug only affects fourth-generation Intel systems (Haswell), so if your CPU isn’t a Haswell CPU you can ignore it. There hasn’t been much feedback from the original reporter so it’s not clear how serious the bug actually is. For what it’s worth, my Haswell system is running fine with the upgraded libc6. The grub-pc bug seems to be tied to a rather specific setup, you’re unlikely to run into it (I didn’t). You’ll have to make up your own mind, but as far as I’m concerned it’s safe to press Y and upgrade the packages. If you’re worried about these bugs, you can put the affected packages on hold and upgrade the rest of the upgradable packages.
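If you choose the hold route, here is a minimal sketch with aptitude (package names taken from the warning above):

```
# Keep the two flagged packages back while upgrading everything else:
sudo aptitude hold libc6 grub-pc
sudo aptitude upgrade
# Once fixed versions are available, release the hold:
sudo aptitude unhold libc6 grub-pc
```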
Bug warning with aptitude upgrade #1019855 & #1019564
1,478,650,619,000
I am asking because a fragment of the file /boot/grub/grub.cfg looks like this:

### BEGIN /etc/grub.d/41_custom ###
if [ -f ${config_directory}/custom.cfg ]; then
  source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then
  source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ###

I don't understand how this is supposed to work, because there is no "source" command in grub2 (see http://www.gnu.org/software/grub/manual/grub.html); source is a command of the /bin/sh shell. I thought that this fragment would include ${config_directory}/custom.cfg while /boot/grub/grub.cfg is being built (with grub-mkconfig):

[user@localhost ~]$ cat /etc/grub.d/41_custom
#!/bin/sh
cat <<EOF
if [ -f \${config_directory}/custom.cfg ]; then
  source \${config_directory}/custom.cfg
elif [ -z "\${config_directory}" -a -f \$prefix/custom.cfg ]; then
  source \$prefix/custom.cfg;
fi
EOF

but it doesn't! It just inserts the text with the "source" command...
Here is a description of how commands map to modulename.mod files: http://blog.fpmurphy.com/2010/06/grub2-modules.html?output=pdf

grep -E "^source" /boot/grub/i386-pc/command.lst
source: configfile
grep -E "^\.:" /boot/grub/i386-pc/command.lst
.: configfile

Here is the code of the function: http://git.savannah.gnu.org/cgit/grub.git/tree/grub-core/commands/configfile.c#n61

So "source" is just an undocumented grub2 command, provided by the configfile module; it is interpreted by grub itself at boot time, not expanded by grub-mkconfig.
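To illustrate how that snippet is meant to be used: you can drop a custom.cfg next to grub.cfg and grub will source it at boot, without re-running grub-mkconfig. A minimal sketch (the menu entry name and kernel paths are made-up examples):

# /boot/grub/custom.cfg -- picked up at boot by the 41_custom snippet above
menuentry "Example rescue entry" {
    set root=(hd0,1)
    linux /vmlinuz root=/dev/sda1 single
    initrd /initrd.img
}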
How is /etc/grub.d/41_custom supposed to work?
1,478,650,619,000
A month ago, a regression happened in Debian 12 that caused dead keys not to work for a while. It was a bug and it was corrected. Things have mostly come back to normal now, except that ` + A doesn't type À anymore. Today it produces: `A (if you have a VM with a Debian 12 version from before May, or a Debian 11, you can check that it was working once: you got the À). I would like to warn the Debian team that the resolution of bug #1070745 isn't finished, because I'm not sure they are aware of this. But I don't know the steps to follow to add such an entry to this issue. Just send an e-mail? To which destination, with what title, and with what formalism? Can you give me clues to do my report correctly?
In this instance I would report a new bug, referring to #1070745: the symptoms aren’t the same as the original bug. To do this, use reportbug; since you know it’s probably caused by libglib2.0-0, specify that: reportbug libglib2.0-0 Include all the information you have. The difference between Debian 11 and 12 might be significant — the original bug affected both. If you can’t use reportbug, send a plain text email to [email protected] with Package: libglib2.0-0 Version: 2.74.6-2+deb12u3 at the top of the body. There’s no particular formalism, you can copy what you’ve written in your question. The title should summarise the issue: “Some dead keys no longer combine after fix for 1070745” for example. See also How to report a bug in Debian.
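If you go the plain-email route, a complete message might look like the following (the subject and body text here are only illustrative):

To: [email protected]
Subject: Some dead keys no longer combine after fix for 1070745

Package: libglib2.0-0
Version: 2.74.6-2+deb12u3

Since the fix for #1070745, dead keys mostly work again, but ` followed
by A now produces `A instead of À. This worked in Debian 11 and in
Debian 12 before May.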
How to notify the Debian team about an issue whose resolution looks unfinished? About dead keys bug, typing: ` + A gives `A and not À like before
1,478,650,619,000
After an update/upgrade on Debian 12 (weekly update), there is NO WiFi, files do not open, the machine is unable to shut down or sleep, and it freezes when using sudo. The bug is fixed in a new kernel release, but I am NOT able to install the new kernel. I downloaded the new kernel file to a flash drive using another laptop and copied it to the freezing laptop, but on running the command: $ sudo apt install ./linux-image-6.1.0-16-amd64_6.1.67-1_amd64.deb the terminal freezes without prompting for a password. EDIT: I tried to edit $ nano /etc/default/grub # GRUB_DEFAULT=0 GRUB_DEFAULT="1>2" But it could not be saved since I did NOT use 'sudo'. And when I use sudo $ sudo nano /etc/default/grub it freezes without prompting for a password (maybe because of the NetworkManager issue). Any help?
I got the trick from @Jaromanda X and details from u/Hendersen43 and u/Only_Space7088 at: https://www.reddit.com/r/debian/comments/18i60wx/networkmanager_service_freezes_the_whole_pc_after/ I am giving details to help newbies: I solved it by: (1) Interrupting the boot process and changing the kernel back to the previous version: While booting, press F9 to interrupt the boot. Then use the arrow keys to highlight the previous kernel version and press ENTER. (2) Downloading the patched kernel (linux-image) from the site (from the above reddit posting): https://ftp.debian.org/debian/pool/main/l/linux-signed-amd64/linux-image-6.1.0-16-amd64_6.1.67-1_amd64.deb (3) Changing directory to the Downloads folder (where the image is downloaded). (4) Installing it using the command (from the above reddit posting): me@debian~/Downloads$ sudo apt install ./linux-image-6.1.0-16-amd64_6.1.67-1_amd64.deb (5) Updating and then upgrading to the matching headers (linux-headers): $ sudo apt update $ sudo apt upgrade EDIT: I was unable to make changes in any system file nor install/remove a kernel, because on hitting 'sudo' the system hangs without prompting for a password. @GAD3R: Thank you for your detailed answer. But I was unable to make any change in any file or install anything since 'sudo' was disabled. @garethTheRed: thank you
After an update (a kernel bug, it seems) Debian 12 hangs when hitting sudo: unable to install the new kernel with the bug fixed
1,478,650,619,000
EDIT: the bug disappears by version 4.3.8. I am using GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu). I believe I have found a bug. I would like to know if perhaps I'm missing something or if my bug is version/platform specific. Bash's history functions will utilize the HISTTIMEFORMAT variable, if defined. So if HISTTIMEFORMAT=%s then history produces:

60 1460542926 history

Additionally, history -w results in the history file containing:

#1460543065
cat $HISTFILE
#1460543082
HISTTIMEFORMAT=%s
#1460543084
history -w

However, if the variable is defined in this way: : ${HISTTIMEFORMAT:=%s } then the output from history is correct, but history -w fails to write the timestamp headers to $HISTFILE:

unset HISTTIMEFORMAT
: ${HISTTIMEFORMAT:=%s }
history -w

If I then simply do export HISTTIMEFORMAT or declare HISTTIMEFORMAT, the problem goes away. However, if the variable is instead auto-exported via set -a, it doesn't work. I could not reproduce this kind of result with a different variable, PS2.

From version 4.3.8 running on a Mint 17 / Ubuntu system

Method 1

$ bash --version
GNU bash, version 4.3.8(1)-release (x86_64-pc-linux-gnu)
$ bash --norc
bash-4.3$ HISTFILE=/tmp/histfile.$$
bash-4.3$ history -c
bash-4.3$ HISTTIMEFORMAT="%s "
bash-4.3$ history
1 1460642608 HISTTIMEFORMAT="%s "
2 1460642610 history
bash-4.3$ history -w
bash-4.3$ cat $HISTFILE
#1460642608
HISTTIMEFORMAT="%s "
#1460642610
history
#1460642612
history -w
bash-4.3$

Method 2

$ bash --norc
bash-4.3$ HISTFILE=/tmp/histfile.$$
bash-4.3$ history -c
bash-4.3$ : ${HISTTIMEFORMAT:="%s "}
bash-4.3$ history
1 1460642758 : ${HISTTIMEFORMAT:="%s "}
2 1460642763 history
bash-4.3$ history -w
bash-4.3$ cat $HISTFILE
#1460642758
: ${HISTTIMEFORMAT:="%s "}
#1460642763
history
#1460642769
history -w
bash-4.3$

From RHEL6 and RHEL7 systems including GNU bash, version 4.2.46(1)-release (x86_64-redhat-linux-gnu) and version 4.1.2(1)-release (x86_64-redhat-linux-gnu)

Method 1

~$ bash --version
GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
~$ bash --norc
bash-4.1$ HISTFILE=/tmp/histfile.$$
bash-4.1$ history -c
bash-4.1$ HISTTIMEFORMAT="%s "
bash-4.1$ history -w
bash-4.1$ cat $HISTFILE
#1460643571
HISTTIMEFORMAT="%s "
#1460643573
history -w
bash-4.1$ exit

Method 2

~$ bash --norc
bash-4.1$ HISTFILE=/tmp/histfile.$$
bash-4.1$ history -c
bash-4.1$ : ${HISTTIMEFORMAT:="%s "}
bash-4.1$ history -w
bash-4.1$ cat $HISTFILE
: ${HISTTIMEFORMAT:="%s "}
history -w
bash-4.1$ history
3 1460643602 : ${HISTTIMEFORMAT:="%s "}
4 1460643606 history -w
5 1460643608 cat $HISTFILE
6 1460643719 history
Further research indicates it was a bug and fixed sometime between the 4.2.x and 4.3.6 releases.
Have I found a bug in bash? [closed]
1,484,929,159,000
If I open konsole, it keeps opening more konsole windows non-stop and the GUI hangs from the overload. I tried apt purge konsole && apt install konsole, and I tried to restart, but to no avail. EDIT: @RuiFRibeiro Here are the contents of ~youruser/.config/konsolerc:

[Desktop Entry]
DefaultProfile=programmer.profile

[Favorite Profiles]
Favorites=programmer.profile

[KFileDialog Settings]
Recent Files[$e]=gott.png,file:$HOME/Pictures/trinity_the_matrix-11351.jpg,file:$HOME/Pictures/gott.png
Recent URLs[$e]=file:$HOME/Pictures/

[MainWindow]
Height 1024=625
State=AAAA/wAAAAD9AAAAAAAABQAAAAOhAAAABAAAAAQAAAAIAAAACPwAAAAA
ToolBarsMovable=Disabled
Width 1280=845
Window-Maximized 1024x1280=true

[Notification Messages]
CloseAllTabs=true
Judging from your previous question, it seems konsole is calling itself. I would view the contents of ~youruser/.config/konsolerc and delete it to fix the problem.
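Rather than deleting the file outright, moving it aside is reversible (konsole will recreate a default one):

mv ~/.config/konsolerc ~/.config/konsolerc.bak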
If I open konsole, it keeps opening more konsole windows non-stop
1,484,929,159,000
I am trying to install clamav in Kali for removing viruses from PCs and USB drives, and I did: root@kali:~# apt-get install clamav Reading package lists... Done Building dependency tree Reading state information... Done Package clamav is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'clamav' has no installation candidate Any help?
Check your repos: http://docs.kali.org/general-use/kali-linux-sources-list-repositories Update the local cache using: apt-get update (as Svetlin Tonchev said). Try again with: apt-get install clamav Optional: search for it using: apt-cache search clamav
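For reference, a standard Kali rolling sources.list entry looks like this (verify it against the linked documentation, since the components have changed over time):

# /etc/apt/sources.list
deb http://http.kali.org/kali kali-rolling main contrib non-free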
E: Package 'clamav' has no installation candidate?
1,484,929,159,000
I'd like to report a bug with my wireless card cutting out (seemingly) randomly, but I don't know what log files to include. I know a tail -F of a particular log file when the card cuts out would be invaluable. I just don't want to submit a bug report with no log files that will get lost in the shuffle. Right now, I'd attach the following: lspci | grep -i ethernet lshw -C network But I still need some sort of log file.
Normally when reporting bugs, the bug reporting system tells you which files to include, but if it doesn't, then: /var/log/dmesg /var/log/daemon.log /var/log/messages lspci -vvv Possibly /var/log/syslog But you can simply install the Debian BTS reporting tool (reportbug) and just follow the instructions. :)
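To capture a live log while the card cuts out, you can follow the kernel messages or syslog in another terminal; for example (pick whichever exists on your system):

dmesg -w                  # follow kernel messages; wireless driver errors usually show up here
journalctl -kf            # same, on systemd-based systems
tail -F /var/log/syslog   # classic syslog approach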
What log files to include when reporting a bug for my wlan card?
1,484,929,159,000
I find it rather hard to believe, but since I installed kernel 4.15.0-58 several days ago, my sound on Linux Mint 19.2 Cinnamon stopped working, and no matter what I tried, like: sudo apt reinstall alsa-base pulseaudio sudo alsa force-reload reboot(s) alsamixer nothing brought my sound back, so I just installed an older kernel, namely the slightly older: 4.15.0-55 Now everything works like a charm. How can I debug this and figure out whether the problem is a bug in the kernel itself or in some other audio-related package? I don't know what exact information you could require for this, so please ask in the comment section, I will answer eventually. I have just reproduced the problem by upgrading back to 4.15.0-58 from 4.15.0-55. Sound not working again. Plus, switching back to the older kernel proves the sound hardware is functional. Hardware: Dell laptop with service tag ==REMOVED== Bug report submitted, thank you Stephen for the link, where I have put all the info
Since downgrading your kernel restored your audio, this appears to be a regression in the kernel. Since you’re using an Ubuntu kernel, the best place to report the issue is there, but if you’re feeling brave, you could look at all the changes listed in the changelog to try to determine what could be causing the problem. If you do report it, you should mention the fact that downgrading from -58 to -55 fixes the problem for you; you should also include the contents of /proc/asound/cards in your working setup, and the module you’re using (lspci -v will tell you). Ideally, you could also try the packages released between -55 and -58; I suspect that the regression is in -56 since that includes a lot of changes.
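To gather the information mentioned above in one go, something like this should do (the grep pattern is just a convenience):

cat /proc/asound/cards           # sound cards as seen by ALSA
lspci -v | grep -i -A12 audio    # the audio device and its kernel driver/module
uname -r                         # kernel version, for the bug report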
Sound not working - bug in a specific version of kernel?
1,484,929,159,000
I need to install isc-dhcp-server on Debian Stretch, but the package does not install correctly. apt-listbugs lists for isc-dhcp-server: #867362 - isc-dhcp-server: DHCP server does not start after upgrade to Stretch. After installing the package:

# systemctl status isc-dhcp-server
● isc-dhcp-server.service - LSB: DHCP server
   Loaded: loaded (/etc/init.d/isc-dhcp-server; generated; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2017-10-05 11:47:47 UTC; 2min 52s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 5416 ExecStart=/etc/init.d/isc-dhcp-server start (code=exited, status=1/FAILURE)

oct. 05 11:47:45 stretch isc-dhcp-server[5416]: Launching both IPv4 and IPv6 servers (please configure INTERFACES in /etc/default/isc
oct. 05 11:47:45 stretch dhcpd[5427]: irs_resconf_load failed: 59.
oct. 05 11:47:45 stretch dhcpd[5427]: Unable to set resolver from resolv.conf; startup continuing but DDNS support may be affected
oct. 05 11:47:45 stretch dhcpd[5427]: Wrote 0 leases to leases file.
oct. 05 11:47:47 stretch isc-dhcp-server[5416]: Starting ISC DHCPv4 server: dhcpdcheck syslog for diagnostics. ... failed!
oct. 05 11:47:47 stretch isc-dhcp-server[5416]: failed!
oct. 05 11:47:47 stretch systemd[1]: isc-dhcp-server.service: Control process exited, code=exited status=1
oct. 05 11:47:47 stretch systemd[1]: Failed to start LSB: DHCP server.
oct. 05 11:47:47 stretch systemd[1]: isc-dhcp-server.service: Unit entered failed state.
oct. 05 11:47:47 stretch systemd[1]: isc-dhcp-server.service: Failed with result 'exit-code'.

Installing the package from source (version 4.3.6) did not solve the problem either. Is there a way to install isc-dhcp-server on Debian Stretch?

journalctl | grep isc-dhcp-server:

oct. 05 11:45:09 stretch isc-dhcp-server[5288]: Launching both IPv4 and IPv6 servers (please configure INTERFACES in /etc/default/isc-dhcp-server if you only want one or the other).
oct. 05 11:45:12 stretch isc-dhcp-server[5288]: Starting ISC DHCPv4 server: dhcpdcheck syslog for diagnostics. ... failed!
oct. 05 11:45:12 stretch isc-dhcp-server[5288]: failed!
oct. 05 11:45:12 stretch systemd[1]: isc-dhcp-server.service: Control process exited, code=exited status=1
oct. 05 11:45:11 stretch audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=isc-dhcp-server comm="systemd" exe="/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
oct. 05 11:45:12 stretch systemd[1]: isc-dhcp-server.service: Unit entered failed state.
oct. 05 11:45:12 stretch systemd[1]: isc-dhcp-server.service: Failed with result 'exit-code'.
oct. 05 11:47:45 stretch isc-dhcp-server[5416]: Launching both IPv4 and IPv6 servers (please configure INTERFACES in /etc/default/isc-dhcp-server if you only want one or the other).
oct. 05 11:47:47 stretch isc-dhcp-server[5416]: Starting ISC DHCPv4 server: dhcpdcheck syslog for diagnostics. ... failed!
oct. 05 11:47:47 stretch isc-dhcp-server[5416]: failed!
oct. 05 11:47:47 stretch systemd[1]: isc-dhcp-server.service: Control process exited, code=exited status=1
oct. 05 11:47:47 stretch audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=isc-dhcp-server comm="systemd" exe="/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
oct. 05 11:47:47 stretch systemd[1]: isc-dhcp-server.service: Unit entered failed state.
oct. 05 11:47:47 stretch systemd[1]: isc-dhcp-server.service: Failed with result 'exit-code'.
systemctl list-units --state=failed:

UNIT                      LOAD   ACTIVE SUB    DESCRIPTION
● dnsmasq.service         loaded failed failed dnsmasq - A lightweight DHCP and caching DNS server
● isc-dhcp-server.service loaded failed failed LSB: DHCP server
I have been using the official ISC dhcpd Debian packages in Stretch without systemd, and I have been running and upgrading that particular DHCP cluster since Debian 6 without many hitches. As for Stretch, I have been using isc-dhcp on it for over a year, I think, since I migrated early to take advantage of new versions instead of assembling a deb of my own (I had some reasons related to Windows DHCP clients). As services of this kind should normally be alone on a machine, I suggest making an exception and having a Debian server with the sysV init utilities. For that, you can do: sudo apt-get install sysvinit-core sysvinit
Is there a way to get a buggy package to work on debian stretch?
1,484,929,159,000
$ dpkg -l gives you a list of all packages installed on your system. Now, some bugs in the Debian BTS are tagged patch. Is there a way to list all packages installed on your system for which patches exist in the Debian BTS? Building, testing and reporting feedback would make the packages better, and in turn Debian better. Is there a way to do it? Update - Bonus points if this can be done with the help of a CLI tool rather than by writing a script.
As a short script:

for source in $(dpkg-query --show -f '${source:Package}\n' | sort -u); do
    bts select source:${source} tag:patch
done

This uses dpkg-query to list the installed source packages, and bts (from the devscripts package) to list all bug numbers corresponding to an open bug with a patch filed against any of the source packages. It relies on packages' naming constraints to simplify parsing (there's no need to handle spaces or special characters). I don't know of any existing command-line tool which does this.
Is there a way to find patches which need testing from packages you have?
1,484,929,159,000
From /. I found this worrisome post by Theodore Ts'o. It turns out ext4 has some journalling problems. How can I quickly find out the version numbers of susceptible kernels for this and other bugs?
You can track (and submit) kernel bugs in the Kernel Bug Tracker at https://bugzilla.kernel.org/.
What is a generic way of finding out whether the kernel has ext4 (or other) bugs?
1,484,929,159,000
How to change the software package an issue is filed under in the Debian bug tracker? The help page has these instructions: In case you need to write a message to the bug and also manipulate the bug's metadata, the commands can also be included in the mail to [email protected]. Do this by starting the mail with the commands prefixed by Control: . Another common way is to send a copy of the mail to [email protected] and start the mail with commands terminated by thanks. and then under Commands available: reassign bugnumber package [ version ] I tried: Control: reassign -1 kde-plasma-desktop [5:142] at the start of mail reply to the bug report with control at bugs.debian.org in the CC (I think I should have used nnn@ here; all below went to the same control email) Control: reassign 1019438 kde-plasma-desktop [ 5:142 ] Control: reassign 1019438 src:kde-plasma-desktop [ 5:142 ] (thanks in the line below) Control: reassign 1019438 kde-plasma-desktop (thanks in the line below) reassign 1019438 kde-plasma-desktop [ 5:142 ] (thanks in the line below) I could not find a full example anywhere so I'm asking here.
The square brackets indicate that the version is optional; they are not part of the command syntax. You should either send this to control@bugs.…:

reassign 1019438 kde-plasma-desktop 5:142
thanks

or this to 1019438@bugs.…:

Control: reassign -1 kde-plasma-desktop 5:142
How do I reassign a bug to another package in the Debian bug tracker?
1,484,929,159,000
GNU bash, version 3.2.48 has this bug; version 3.2.57 already does not. Make a file with 8000 identical lines (say each line says 1). Run split -a3 -p "1" on it (-p is a BSD split option which makes it split on the given pattern. For a file with just one 1 per line, you can do the same thing with a standard split by running split -a3 -b 1). Then execute find . -name xaaa -exec echo {} + After the expected output, you get find: fts_read: Invalid argument written to stderr. The same error occurs when xaaa is replaced by any set of files, and echo by any other command I've tried. The length of the filename doesn't matter. The directory of the files also doesn't matter. After creating some files elsewhere, the error is gone. However, when xaaa is replaced by xaa* (or any other wildcard that includes multiple files, at least one of which is near the beginning of the directory listing), then the error occurs again. At that point, no single file causes the error to appear. Replacing + with ; does avoid the error, but is not acceptable for my script. This problem has been occurring intermittently in other situations in my script, but by reducing it I was able to come up with a simple way of replicating it. I want the script to stop if an error occurs, but this just makes it stop very often. Any idea how to get around this? (e.g. retrieve an error code and ignore just this specific error). Mac OS version 10.8.5. Darwin Kernel Version 12.5.0.
There's very little chance bash is involved here. bash's role is to parse that find . -name xaaa -exec echo {} + code and turn it into the execution of /path/to/find with find, ., -name, xaaa, -exec, echo, {}, +. Any bash version would do the same. Here, as the find: prefix in the error message shows, the error comes from find, not bash, and specifically is about an error returned by the fts_read() function. macOS's find is that of FreeBSD, possibly with modifications by Apple. Thankfully, FreeBSD (if not macOS) is FLOSS, and it's relatively easy to spot where that error is output in the code there. /* * find_execute -- * take a search plan and an array of search paths and executes the plan * over all FTSENT's returned for the given search paths. */ int find_execute(PLAN *plan, char *paths[]) [...] e = errno; finish_execplus(); if (e && (!ignore_readdir_race || e != ENOENT)) errc(1, e, "fts_read"); return (exitstatus); } At the very end of that find_execute function. Here we get: find: fts_read: Invalid argument Invalid argument being the error message that corresponds to the EINVAL errno. Here errno would have been set by fts_read: errno is set to 0 in the condition statement of the while loop just before calling fts_read, and since we're out of the loop, fts_read must have returned NULL. If you look at the man page of fts_read, you'll find: The functions fts_read() and fts_children() may fail and set errno for any of the errors specified for the library functions chdir(2), malloc(3), opendir(3), readdir(3) and stat(2). If you follow those links to the other man pages, none of them is meant to return EINVAL. So it very much looks like a bug either in Apple's find or the fts library there. I'd suggest you raise that with Apple support, as without access to the code there's likely little more any of us can do here.
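If you need a workaround in the meantime, one option (untested against this exact bug, so treat it as a suggestion) is to use GNU findutils instead of the system find; Homebrew installs it with a g prefix:

brew install findutils
gfind . -name 'xaa*' -exec echo {} +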
`find: fts_read: Invalid argument` when working with around 8000 files
1,484,929,159,000
Somehow v gets unset after calling f. $ zsh -xc 'v=1; f() { local v; v=2 true; }; f; typeset -p v' +zsh:1> v=1 +zsh:1> f +f:0> local v +f:0> v=2 +f:0> true +zsh:1> typeset -p v zsh:typeset:1: no such variable: v Here is the gist of my original reproduction report. I did email [email protected], but I have yet to receive any replies.
That was a bug indeed. You did the right thing by reporting it. It was then fixed by this commit: https://sourceforge.net/p/zsh/code/ci/d946f22a4cd2eed0f3a67881cfa57c805703929c/ which will be included in the next version. And here's the explanation from zsh's maintainer: On Wed, 2019-08-14 at 10:37 +0100, Stephane Chazelas wrote: > 2019-08-08 20:38:05 +0430, Aryn Starr: > Now, that being said, as discussed on U&L it looks like a bug > indeed and a shorter reproducer is: > > $ zsh -xc 'v=1; f() { local v; v=2 true; }; f; typeset -p v' > +zsh:1> v=1 > +zsh:1> f > +f:0> local v > +f:0> v=2 +f:0> true > +zsh:1> typeset -p v > zsh:typeset:1: no such variable: v > > Most likely, that's the "v=2 true" (where "true" is a builtin) that ends up > unsetting the "v" from the global scope. Yes, the saved version of "v" that we restore after the builtin is missing the pointer back to the version of v in the enclosing scope. So it's not only not shown as set, it will leak memory. This simply preserves that pointer in the copy, but this assumes we've correctly blocked off the old parameter from being altered inside the function scope --- if we haven't that preserved old pointer is going to get us into trouble. However, if we haven't that's already a bug, so this shouldn't make things worse. pws [patch skipped]
zsh: Variable gets unset without reason
1,484,929,159,000
I have tftpd-hpa installed (Ubuntu 16.04 LTS). Recently, maybe after getting some updates (or uninstalling some application with vast dependencies), tftpd-hpa doesn't start anymore. The tftpd-hpa settings are:

TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/var/lib/tftpboot"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure --create"

The default root directory is /var/lib/tftpboot. Output of the systemctl status tftpd-hpa.service and journalctl -xe commands (the journal excerpt was repeated several times by the pager; it is shown once here):

testlab@Amtek:~$ systemctl status tftpd-hpa.service
● tftpd-hpa.service - LSB: HPA's tftp server
   Loaded: loaded (/etc/init.d/tftpd-hpa; bad; vendor preset: enabled)
   Active: failed (Result: exit-code) since Tue 2017-04-18 01:47:32 EEST; 2min 8s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 4764 ExecStart=/etc/init.d/tftpd-hpa start (code=exited, status=71)

Apr 18 01:47:32 Amtek systemd[1]: Stopped LSB: HPA's tftp server.
Apr 18 01:47:32 Amtek systemd[1]: Starting LSB: HPA's tftp server...
Apr 18 01:47:32 Amtek tftpd-hpa[4764]: * Starting HPA's tftpd in.tftpd
Apr 18 01:47:32 Amtek in.tftpd[4777]: cannot bind to local IPv4 socket: Address already in use
Apr 18 01:47:32 Amtek systemd[1]: tftpd-hpa.service: Control process exited, code=exited status=71
Apr 18 01:47:32 Amtek systemd[1]: Failed to start LSB: HPA's tftp server.
Apr 18 01:47:32 Amtek systemd[1]: tftpd-hpa.service: Unit entered failed state.
Apr 18 01:47:32 Amtek systemd[1]: tftpd-hpa.service: Failed with result 'exit-code'.

testlab@Amtek:~$ journalctl -xe
-- Unit tftpd-hpa.service has finished shutting down.
Apr 18 01:47:32 Amtek systemd[1]: Starting LSB: HPA's tftp server...
-- Subject: Unit tftpd-hpa.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit tftpd-hpa.service has begun starting up.
Apr 18 01:47:32 Amtek tftpd-hpa[4764]: * Starting HPA's tftpd in.tftpd
Apr 18 01:47:32 Amtek in.tftpd[4777]: cannot bind to local IPv4 socket: Address already in use
Apr 18 01:47:32 Amtek systemd[1]: tftpd-hpa.service: Control process exited, code=exited status=71
Apr 18 01:47:32 Amtek systemd[1]: Failed to start LSB: HPA's tftp server.
-- Subject: Unit tftpd-hpa.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit tftpd-hpa.service has failed.
--
-- The result is failed.
Apr 18 01:47:32 Amtek systemd[1]: tftpd-hpa.service: Unit entered failed state.
Apr 18 01:47:32 Amtek systemd[1]: tftpd-hpa.service: Failed with result 'exit-code'.
Apr 18 01:47:32 Amtek polkitd(authority=local)[885]: Unregistered Authentication Agent for unix-process:4752:1729339 (system bus name :1.
Apr 18 01:48:46 Amtek kernel: [drm:intel_pipe_update_end [i915]] *ERROR* Atomic update failure on pipe A (start=585206 end=585207) time 9

EDIT: below is the sudo netstat -lnp | grep 69 output:

udp        0      0 0.0.0.0:69      0.0.0.0:*                   851/inetd
unix  2    [ ACC ] STREAM LISTENING 20940 1069/Xorg @/tmp/.X11-unix/X0
unix  2    [ ACC ] STREAM LISTENING 20941 1069/Xorg /tmp/.X11-unix/X0
Combining the two pieces of information: Recently, after some updates received And the following error: Apr 18 01:47:32 Amtek in.tftpd[4777]: cannot bind to local IPv4 socket: Address already in use It seems like the problem is that the tftp port (69) is already in use when you start the tftp server. This might be due to a new program which was installed/updated recently. Running the following command will help you figure out which process is using the tftp port (69) on your machine: netstat -lnp | grep 69 netstat man -l, --listening Show only listening sockets. (These are omitted by default.) --numeric , -n Show numerical addresses instead of trying to determine symbolic host, port or user names. -p, --program Show the PID and name of the program to which each socket belongs. Note: the | grep 69 filters the result and will show only the lines which contain 69. Edit: As you added to your question, the result of netstat -lnp shows that inetd is now listening on udp port 69; a tftp service is probably being run as part of inetd, hence you can't run tftpd-hpa on this port. Note: please check whether that tftpd is running; if it is good enough, you can avoid using tftpd-hpa altogether. You have two options: Move the tftp server to a different port This can be done by changing the following line, which sets tftp on port 69: TFTP_ADDRESS=":69" to use another port, e.g. 6900: TFTP_ADDRESS=":6900" Note: using this solution will require that the tftp client uses the new port number. Stop inetd from using udp/69 You can check whether a tftp configuration file exists in the inetd folder; search for a file with a name like: /etc/xinetd.d/tftp or /etc/inetd.d/tftp
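If such an inetd entry exists, on Debian/Ubuntu you can usually disable it with update-inetd instead of editing the files by hand (assuming the service is registered under the name tftp):

sudo update-inetd --disable tftp
sudo systemctl restart inetd      # or openbsd-inetd / xinetd, depending on what's installed
sudo systemctl restart tftpd-hpa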
The tftpd-hpa doesn't start after update
1,484,929,159,000
I've come across an interesting issue this morning. We have noticed that GDM on RHEL 7 is allowing us to log-in with only the first 7 characters of password. We can enter anything or nothing from characters 8 onwards and we still get authenticated and logged in. This problem affects all RHEL 7 workstations on the network which use NIS. I did a quick search around for any potential existing bugs but have not been able to identify anything obvious. Any suggestions as to if this is a known issue or what may be the cause?
It's known: NIS can use DES-based crypt (traditional DES crypt only looks at the first few characters of a password, which is where the short-password behaviour comes from) or other formats which support longer passwords. Further reading:

AJ's Open Source, openSUSE and SUSE Ramblings
Migration of NIS yppasswd hashes from crypt to md5
Are passwords on modern Unix/Linux systems still limited to 8 characters?
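You can check which hash format your NIS maps actually use; traditional DES crypt hashes are 13 characters with no $ prefix, while for example MD5 hashes start with $1$:

ypcat passwd | head -n 1   # look at the second field: a 13-character hash means DES crypt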
Red Hat 7 GDM + NIS Only Validates First 7 Characters of Password
1,484,929,159,000
I run Debian testing and occasionally notice hiccups in my system. I'd like to be able to see if someone has reported a related bug, but I don't always know what package names to look for. Or, before running a dist-upgrade, I might just want to see if there has been a spike in bug reports in the last few days, to check whether something major has recently broken. Is there some way to just have the Debian bug report website spit back a list of bugs, sorted by submission date? Almost all the documentation for the Debian bug tracker is about how to submit a bug report. Submitting a search request that's blank apart from "ordered by age" doesn't work, nor does a request that's blank except for "recent bugs." Using * for the package name doesn't work either.
That's indeed poorly documented. You can display the latest 100 entries using this link: https://bugs.debian.org/cgi-bin/pkgreport.cgi?newest=100 You can further narrow your search by selecting the desired distribution, e.g. https://bugs.debian.org/cgi-bin/pkgreport.cgi?newest=100;dist=stable2 This will take you to a form where you can enter search criteria: https://bugs.debian.org/cgi-bin/pkgreport.cgi?newest=;dist=stable
How to view recently submitted bugs in Debian bugtracker
1,484,929,159,000
I'm using Debian Wheezy 64-bit, and wine only exists as a 32-bit version, so I added multiarch support. But when I want to install winetricks, its dependencies are depends on wine | wine-unstable and not depends on wine | wine-unstable | wine:i386 So aptitude suggests installing the dummy 64-bit package, or not installing winetricks... which doesn't make a lot of sense :) So I wonder if I have to report a bug because winetricks' dependencies are wrong. To me it seems like that, but I would expect that bug report to have been written already. How do I find out if a package has already been multiarchified? Will this issue be solved by only adding the | wine:i386 to the package information? Shall I write a bug report in such cases?
The problem is not winetricks - multiarch works differently than you think (I suggest (re-)reading the first sections of Debian's Multiarch-HOWTO). You actually need to install the wine:amd64 package instead of the wine:i386 package. The Wheezy wine package depends on wine-bin | wine64-bin. The first is resolved by the wine-bin:i386 package, as it has a Multi-Arch: foreign field in its control file. You can show its entries, for example, using apt-cache show wine-bin. On newer Debian systems, the wine:amd64 package depends on wine64 | wine32. The latter is resolved by the wine32:i386 package.
Debian - How to find out if a package is multiarchified? Dependency changes as bug report
1,484,929,159,000
I am currently experiencing this bug, except that I'm using the Wheezy/testing netinstaller. Strangely, I used the netinst .iso a few months ago, and everything was fine. So it almost seems as if the same bug keeps creeping back into the system. That said, I have a very hard time understanding the format of the bug reporting system. What does 'archived' mean? Does that mean it was fixed? And what should I do to report my bug, given that it's with the most recent version of the installer?
Bug is archived. No further changes may be made. “Archived” means that the bug has been resolved in some way (fixed, or closed as invalid). An archived bug won't change at all. If you see no non-archived bug that corresponds to your issue, report a new one.
Trouble understanding the Debian bug reporting system
1,484,929,159,000
I'm using a live and persistent version of Ubuntu 13.04, created with LinuxLive USB Creator. The persistence mostly works, including for documents and apps, however the desktop background image and keyboard layout settings have to be configured manually each boot; the system prompts to install to a hard drive as well. How can I fix this?
Please look at this post: How do I set the default icon set and wallpaper for new users? Changing the wallpaper Open a terminal and type gconf-editor. Now, navigate to apps > gnome > background. Edit the key picture_filename and type in the path of the desired wallpaper image. Try rebooting and check if it works. I have verified that this works with Ubuntu 10.04 in VirtualBox. Keyboard layout Well, every Linux distribution has the /etc directory, I believe :) Following is the content of /etc/default/keyboard on my machine. Try and/or adapt it: # KEYBOARD CONFIGURATION FILE # Consult the keyboard(5) manual page. XKBMODEL="pc105" XKBLAYOUT="us" XKBVARIANT="" XKBOPTIONS="" BACKSPACE="guess" Alternative approach If the above method for setting the wallpaper didn't work out, you can try this alternative -- but not-so-straightforward -- method. First, locate all the images on your system. Default wallpaper images could possibly be in /usr/share/backgrounds. If you find them there, skip this step. find / -type f \( -name '*.png' -o -name '*.svg' -o -name '*.jpg' \) 2>/dev/null (note the grouping parentheses; without them -type f would apply only to the first -name test). This would list a lot of images used by different applications. If you can manually screen them, you should narrow down your choices. Suppose you determined that the image desktop.png is set as the wallpaper. The next step would be to find all the file(s) that refer to the name of the image: find / -type f -exec grep -l 'desktop\.png' {} + 2>/dev/null (grep -l lists the matching files rather than the matching lines). If it shows a file, open that in some editor, replace the image location with the desired one, and reboot.
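On Ubuntu 13.04 specifically, the GConf approach above may not apply, since Unity/GNOME 3 use GSettings instead of GConf; a likely equivalent (untested on a live-USB setup) would be:

gsettings set org.gnome.desktop.background picture-uri 'file:///path/to/wallpaper.jpg'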
Ubuntu 13.04 with LinuxLive USB Creator and persistence forgets desktop background and keyboard layout
1,484,929,159,000
The installer won't start in graphical mode or text mode. Where should I go to review/discuss/report possible bugs in the iso and/or installer? https://cdimage.debian.org/images/bookworm_di_rc1/amd64/iso-cd/ does not seem to say, nor does the www.debian.org CD FAQ, nor the lists.debian.org debian-testing list. I have tried the three different graphics emulations in VirtualBox. I wish to test UEFI installs. Secure Boot is disabled, and disabling I/O APIC gives a configuration error. PAE/NX and Nested VT-x/AMD-V are on; if those settings cannot work, that should be recorded in a bug. The VM has 4G of RAM and 4 cores allocated to it. https://wiki.debian.org/Teams/DebianCd has a link to IRC, but I don't have a safe set-up for that, and am not comfortable with the security arrangements for it used naively. There are no bugs about the installer on https://bugs.debian.org/release-critical/debian/all.html Searching https://udd.debian.org/bugs/ for "bookworm" (radio button) and "installer bugs" (tickbox) found nothing. The same output occurs with and without the vga=788 option as far as I can see: Edit: The text installer shows part of a stack trace on pressing alt+f5; I got the rest on alt+f3 with dmesg | less
The RC1 installer announcement includes links to known errata and the installer page. In general, the recommendation is to submit an installation report; you can view current installation reports on the installation-reports pseudo-package. The relevant mailing list for your situation is probably debian-cd.
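For the installation report itself, the usual route is reportbug against the installation-reports pseudo-package, which pre-fills the expected template for you:

reportbug installation-reports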
Debian Bookworm RC1 iso installer won't start on Virtualbox 7.0.6 on Ubuntu 22.04.2, how to find the status of this issue?
1,484,929,159,000
I have a laptop on which I am installing RHEL 8.3 (Developer subscription). I get an error: An unknown error has occurred. Reference is made to anaconda, and I found a related article on the Red Hat portal: https://access.redhat.com/solutions/5116361 However, this does not help me, as I am following the standard install procedure and cannot see how to work around it. I am doing a basic install. Any ideas?
I installed RHEL 8.2 instead and had no such issue. As GAD3R stated, it appears to be a bug. Bug 1921159 submitted.
"An unknown error has occurred" is output when installing RHEL 8.3 on fresh system
1,484,929,159,000
I had the following script to alert me when someone sends me mail:

cd /var/mail
watch -g ls && cat end
./alert

end would be a blank file; when I am going home, I would modify the end file from another shell, and the script would end due to the -g switch. I then realized that, rather than opening another shell, I could simply tell the script the time I'm going home, and it would modify the end file by itself at that time. When I do this and send watch into the background, the script terminates as expected. However, the shell starts all kinds of weird behaviour after the script has exited. Simplest buggy example:

( watch -g cat end ) &
sleep 2
echo y >> end

I thought I would use a different application to avoid this bug. However, both bash and konsole seem to have this issue on my system (Debian). I should add that running the above code directly from the command line does seem to work as expected. It only yields this weird behaviour when run from within a script.
Two processes writing to the screen at the same time can mess up the terminal. Try appending reset after echo y; it should reset the terminal back to normal. Maybe add a short sleep before it, too, so watch can't write after reset has run. Update: If you aren't interested in the output of watch, just redirect both its stdout and stderr to nowhere: ( watch -g cat end >& /dev/null ) & Then it won't clutter the screen and you won't need any reset.
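Putting the two suggestions together, the buggy example from the question would become something like this (the redirect makes the reset unnecessary):

( watch -g cat end >& /dev/null ) &   # watch can no longer clutter the screen
sleep 2
echo y >> end                         # watch notices the change and exits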
Bug: `watch &` won't work within a script
1,504,304,251,000
I would like to set up wpa_supplicant and openvpn to run as non-root user, like the recommended setup for wireshark. I can't find any documentation for what +eip in this example means: sudo setcap cap_net_raw,cap_net_admin,cap_dac_override+eip /usr/bin/dumpcap
The way capabilities work in Linux is documented in man 7 capabilities. Permission checks for a process are done against the capabilities in its effective set. File capabilities are used during an execv call (which happens when you want to run another program[1]) to calculate the new capability sets for the process. Files have two capability sets, permitted and inheritable, plus an effective bit. Processes have three capability sets: effective, permitted and inheritable. There is also a bounding set, which limits which capabilities may be added later to a process's inherited set and affects how capabilities are calculated during a call to execv. Capabilities can only be dropped from the bounding set, not added. A process can raise its capabilities from the permitted to the effective set (using the capget and capset syscalls; the recommended APIs are respectively cap_get_proc and cap_set_proc). Inheritable and bounding sets and file capabilities come into play during an execv syscall. During execv, new effective and permitted sets are calculated, while the inherited and bounding sets stay unchanged. The algorithm is described in the capabilities man page: P'(permitted) = (P(inheritable) & F(inheritable)) | (F(permitted) & cap_bset) P'(effective) = F(effective) ? P'(permitted) : 0 P'(inheritable) = P(inheritable) [i.e., unchanged] Where P is the old capability set, P' is the capability set after execv and F is the file capability set. If a capability is in both the process's inheritable set and the file's inheritable set (intersection/logical AND), it is added to the permitted set. The file's permitted set is added (union/logical OR) to it (if it is within the bounding set). If the effective bit in the file capabilities is set, all permitted capabilities are raised to effective after execv. Capabilities in the kernel are actually set for threads, but regarding file capabilities this distinction is usually relevant only if the process alters its own capabilities. In your example, the capabilities cap_net_raw, cap_net_admin and cap_dac_override are added to the file's inheritable and permitted sets, and the effective bit is set. When your binary is executed, the process will have those capabilities in its effective and permitted sets, if they are not limited by a bounding set. [1] For the fork syscall, all the capabilities and the bounding set are copied from the parent process. Changes in uid also have their own semantics for how capabilities are set in the effective and permitted sets.
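As a concrete illustration of the +eip flags in practice (paths assumed; a copy of ping is used as the test binary, since the copy loses the original's setuid bit):

cp /bin/ping ./ping                  # plain copy: no setuid bit, no capabilities
sudo setcap cap_net_raw+eip ./ping   # sets F(permitted), F(inheritable) and the effective bit
getcap ./ping                        # -> ./ping = cap_net_raw+eip
./ping -c1 localhost                 # works: P'(effective) = P'(permitted), which contains cap_net_raw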
How to set capabilities with setcap command?
1,504,304,251,000
An answer to the question "Allowing a regular user to listen to a port below 1024", specified giving an executable additional permissions using setcap such that the program could bind to ports smaller than 1024: setcap 'cap_net_bind_service=+ep' /path/to/program What is the correct way to undo these permissions?
To remove capabilities from a file, use the -r flag:

setcap -r /path/to/program

This will result in the program having no capabilities.
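You can confirm that it worked; getcap simply prints nothing for a file with no capabilities:

setcap -r /path/to/program
getcap /path/to/program    # no output means no capabilities are set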
Unset `setcap` additional capabilities on executable
1,504,304,251,000
I am experimenting with capabilities, on Debian GNU/Linux. I have copied /bin/ping to my current working directory. As expected it does not work, since it was originally setuid root. I then give my ping the minimal capabilities (not root) by doing sudo /sbin/setcap cap_net_raw=ep ./ping, and my ping works, as expected. Then sudo /sbin/setcap -r ./ping to revoke that capability. It is now not working, as expected. I now try to get ping working using capsh. capsh has no privileges, so I need to run it as root, but then drop root and thus all other privileges. I think I also need secure-keep-caps; this is not documented in capsh, but is in the capabilities manual. I got the bit numbers from /usr/include/linux/securebits.h. They seem correct, as the output of --print shows these bits to be correct. I have been fiddling for hours; so far I have this: sudo /sbin/capsh --keep=1 --secbits=0x10 --caps="cap_net_raw+epi" == --secbits=0x10 --user=${USER} --print -- -c "./ping localhost" However ping errors with ping: icmp open socket: Operation not permitted, which is what happens when it does not have the capability. Also the --print shows Current: =p cap_net_raw+i; this is not enough, we need e. sudo /sbin/capsh --caps="cap_net_raw+epi" --print -- -c "./ping localhost" will set the capability to Current: = cap_net_raw+eip which is correct, but leaves us as root. Edit-1 I have now tried sudo /sbin/capsh --keep=1 --secbits=0x11 --caps=cap_net_raw+epi --print -- -c "touch zz; ./ping -c1 localhost;" This produces: touch: cannot touch `zz': Permission denied ping: icmp open socket: Operation not permitted The first error is expected, as secure-noroot: yes But the second is not: Current: = cap_net_raw+eip Edit-2 If I put == before the --print, it now shows Current: = cap_net_raw+i, so that explains the previous error, but not why we are losing the capability when switching out of root; I thought that secure-keep-caps should fix that. Edit-3 From what I can see, I am losing Effective (e) and Permitted (p) when exec is called. This is expected, but I thought that secure-keep-caps should stop them being lost. Am I missing something? Edit-4 I have been doing more research, and reading the manual again. It seems that normally e and p capabilities are lost when: you switch from user root (or apply secure-noroot, thus making root a normal user), which can be overridden with secure-keep-caps; when you call exec, which as far as I can tell is an invariant. As far as I can tell, it is working according to the manual. As far as I can tell, there is no way to do anything useful with capsh. As far as I can tell, to use capabilities you need to: use file capabilities, or have a capabilities-aware program that does not use exec. Therefore no privileged wrapper. So now my question is: what am I missing, and what is capsh for? Edit-5 I have added an answer re ambient capabilities. Maybe capsh can also be used with inherited capabilities, but to be useful these would need to be set on the executable file. I cannot see how capsh can do anything useful without ambient capabilities, or a way to allow inherited capabilities. Versions: capsh from package libcap2-bin version 1:2.22-1.2; before edit-3 I grabbed the latest capsh from git://git.debian.org/collab-maint/libcap2.git and started using it. uname -a Linux richard-laptop 3.2.0-4-amd64 #1 SMP Debian 3.2.65-1+deb7u2 x86_64 GNU/Linux User-land is 32-bit.
There may be a bug/feature in the kernel. There has been some discussion: https://bugzilla.altlinux.org/show_bug.cgi?id=16694 http://linux.derkeiler.com/Mailing-Lists/Kernel/2005-03/5224.html I have no idea if anything has been done to fix it. Don't get me wrong - the current behaviour is secure. But it's so secure that it gets in the way of things which should appear to work. Edit: According to http://man7.org/linux/man-pages/man7/capabilities.7.html there is a new capability set, Ambient (since Linux 4.3). It looks like this will allow what is needed.
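With a new enough libcap (and Linux >= 4.3), the capsh man page shows an ambient-capability example that achieves what the question attempted; a sketch based on it (user and ping path assumed):

sudo capsh --caps="cap_net_raw+eip cap_setpcap,cap_setuid,cap_setgid+ep" \
    --keep=1 --user="$USER" --addamb=cap_net_raw \
    -- -c "./ping -c1 localhost"

The extra cap_setpcap, cap_setuid and cap_setgid are there so capsh itself can set the securebits, switch user and raise the ambient capability before executing the command.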
How do I use capsh: I am trying to run an unprivileged ping, with minimal capabilities
1,504,304,251,000
I am trying to create a systemd service for a web server process that has to bind to port 80 and 443. I found some examples setting AmbientCapabilities=CAP_NET_BIND_SERVICE and setting both AmbientCapabilities and CapabilityBoundingSet. From the doc, it is not clear. Systemd doc: link. Linux man doc: link Should I set both or just AmbientCapabilities?
They're complete opposites: AmbientCapabilities grants capabilities that the process normally wouldn't have started with. CapabilityBoundingSet limits capabilities the process is allowed to obtain. It doesn't grant any. For your task, it is enough to set AmbientCapabilities to grant the privileges – the bounding set already allows everything by default, so there's no need to change it. Instead, the latter is meant to be a security hardening feature. Even if the service literally runs as root (uid 0) – or calls a setuid-root program like 'su' or 'sudo' – it can never gain any privileges that aren't in its bounding set. But you can (and perhaps should) set both if you're sure your service won't be directly running anything that needs higher privileges.
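A minimal unit sketch for the web-server case (the service name, user and binary path are placeholders):

# /etc/systemd/system/mywebserver.service (hypothetical)
[Unit]
Description=Example web server on ports 80/443

[Service]
User=www-data
ExecStart=/usr/local/bin/mywebserver
# Grant the one capability needed to bind ports below 1024:
AmbientCapabilities=CAP_NET_BIND_SERVICE
# Optional hardening: never allow anything beyond that:
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
NoNewPrivileges=yes

[Install]
WantedBy=multi-user.target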
What is the difference between AmbientCapabilities and CapabilityBoundingSet?
1,504,304,251,000
I want to give node.js the ability to listen on port 80, and to shut down the computer. Initially I tried these two commands in sequence: setcap cap_net_bind_service=+ep /usr/bin/nodejs setcap cap_sys_boot=+ep /usr/bin/nodejs Then my app was failing to bind to port 80. I checked with getcap: # getcap /usr/bin/nodejs /usr/bin/nodejs = cap_sys_boot+ep If I run setcap again for cap_net_bind_service: # getcap /usr/bin/nodejs /usr/bin/nodejs = cap_net_bind_service+ep I don't see anything in the man page http://linux.die.net/man/8/setcap about setting multiple capabilities, and try some things in desperation: # setcap cap_net_bind_service=+ep /usr/bin/nodejs cap_sys_boot=+ep /usr/bin/nodejs # getcap /usr/bin/nodejs /usr/bin/nodejs = cap_sys_boot+ep # setcap cap_net_bind_service=+ep cap_sys_boot=+ep /usr/bin/nodejs Failed to set capabilities on file `cap_sys_boot=+ep' (No such file or directory) How do I set multiple capabilities?
And one last desperate syntax guess pays off: # setcap cap_net_bind_service,cap_sys_boot=+ep /usr/bin/nodejs # getcap /usr/bin/nodejs /usr/bin/nodejs = cap_net_bind_service,cap_sys_boot+ep
'setcap' overwrites last capability. How do I set multiple capabilities?
1,504,304,251,000
As far as I know, ping needs to create a raw socket (which needs either root access or the cap_net_raw capability). From my understanding, the trend these last years has been to remove setuid binaries and replace them with capabilities. However, when I look at the ping binary on my Fedora 32, it doesn't appear to have any:
$ ls -la $(which ping)
-rwxr-xr-x. 1 root root 82960 May 18 10:26 /usr/bin/ping
$ sudo getcap -v $(which ping)
/usr/bin/ping
$
Does ping need to open a raw socket on Fedora? Or is there another way to give it the permission to open a raw socket?
I think https://fedoraproject.org/wiki/Changes/EnableSysctlPingGroupRange answers your question:
Enable the Linux kernel's net.ipv4.ping_group_range parameter to cover all groups. This will let all users on the operating system create ICMP Echo sockets without using setuid binaries, or having the CAP_NET_ADMIN and CAP_NET_RAW file capabilities.
Cross-reference detail:
Targeted release: Fedora 31
Last updated: 2019-08-13
Tracker bug: #1740809
Release notes tracker: #376
The sysctl documentation writes,
ping_group_range - 2 INTEGERS
Restrict ICMP_PROTO datagram sockets to users in the group range. The default is "1 0", meaning, that nobody (not even root) may create ping sockets. Setting it to "100 100" would grant permissions to the single group. "0 4294967295" would enable it for the world, "100 4294967295" would enable it for the users, but not daemons.
An older code example demonstrates the use of this feature, and in particular shows that the socket is created as a datagram socket with the IPPROTO_ICMP protocol to identify that it will carry ICMP:
int sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP)
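To make this concrete, here is a hedged sketch of sending one echo request over such an unprivileged ICMP socket (the destination address is illustrative, and it only works if one of your groups falls inside ping_group_range):
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/ip_icmp.h>
#include <arpa/inet.h>

int main(void)
{
    /* No privileges or capabilities needed for SOCK_DGRAM + IPPROTO_ICMP */
    int sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP);
    if (sock < 0) { perror("socket"); return 1; }

    struct icmphdr hdr = { 0 };
    hdr.type = ICMP_ECHO;       /* echo request; code stays 0 */
    hdr.un.echo.sequence = 1;   /* the kernel fills in the id and checksum */

    struct sockaddr_in dst = { 0 };
    dst.sin_family = AF_INET;
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    if (sendto(sock, &hdr, sizeof hdr, 0,
               (struct sockaddr *)&dst, sizeof dst) < 0) {
        perror("sendto");
        return 1;
    }

    char buf[512];
    ssize_t n = recv(sock, buf, sizeof buf, 0);   /* the echo reply */
    printf("received %zd bytes\n", n);
    close(sock);
    return 0;
}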
How does ping work on Fedora without setuid and capabilities?
1,504,304,251,000
Using setcap to give additional permissions to a binary must record the new permission somewhere, on storage or in memory. Where is it stored? Using lsof as-is doesn't work because the process disappears too quickly.
Extended permissions such as access control lists set by setfacl and capability flags set by setcap are stored in the same place as traditional permissions and set[ug]id flags set by chmod: in the file's inode. (They may actually be stored in a separate block on the disk, because an inode has a fixed size which has room for the traditional permission bits but not for the potentially unbounded extended permissions. But that only matters in rare cases, such as having to care that setcap could run out of disk space. But even chmod could run out of disk space on a system that uses deduplication!) GNU ls doesn't display a file's setcap attributes. You can display them with getcap. You can list all the extended attributes with getfattr -d -m -; the setcap attribute is called security.capability and it is encoded in a binary format which getcap decodes for you.
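Since it is just an extended attribute, you can also read it programmatically. A small sketch (the path is illustrative; the attribute's binary payload is what getcap decodes into text):
#include <stdio.h>
#include <sys/xattr.h>

int main(void)
{
    char buf[64];
    /* security.capability is the extended attribute that setcap writes */
    ssize_t n = getxattr("/usr/bin/ping", "security.capability",
                         buf, sizeof buf);
    if (n < 0) { perror("getxattr"); return 1; }

    printf("security.capability is %zd bytes of binary data:\n", n);
    for (ssize_t i = 0; i < n; i++)
        printf("%02x ", (unsigned char)buf[i]);
    putchar('\n');
    return 0;
}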
When using setcap, where is the permission stored?
1,504,304,251,000
I want to run a command on Linux in a way that it cannot create or open any files for writing. It should still be able to read files as normal (so an empty chroot is not an option), and still be able to write to files already open (especially stdout). Bonus points if writing files to certain directories (i.e. the current directory) is still possible. I’m looking for a solution that is process-local, i.e. one that does not involve configuring things like AppArmor or SELinux for the whole system, nor root privileges. It may involve installing kernel modules, though. I was looking at capabilities, and these would have been nice and easy if there were a capability for creating files. ulimit is another approach that would be convenient, if it covered this use case.
It seems that the right tool for this job is seccomp. Based on sync-ignoring code by Bastian Blank, I came up with this relatively small file that causes all its children to not be able to open a file for writing:
/*
 * Copyright (C) 2013 Joachim Breitner <[email protected]>
 *
 * Based on code Copyright (C) 2013 Bastian Blank <[email protected]>
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 * 1. Redistributions of source code must retain the above copyright notice, this
 *    list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright notice,
 *    this list of conditions and the following disclaimer in the documentation
 *    and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
 * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
 * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
 * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
 * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
 * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
 * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#define _GNU_SOURCE 1
#include <errno.h>
#include <fcntl.h>
#include <seccomp.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define filter_rule_add(action, syscall, count, ...) \
  if (seccomp_rule_add(filter, action, syscall, count, ##__VA_ARGS__)) abort();

static int filter_init(void)
{
  scmp_filter_ctx filter;

  if (!(filter = seccomp_init(SCMP_ACT_ALLOW))) abort();
  if (seccomp_attr_set(filter, SCMP_FLTATR_CTL_NNP, 1)) abort();
  filter_rule_add(SCMP_ACT_ERRNO(EACCES), SCMP_SYS(open), 1, SCMP_A1(SCMP_CMP_MASKED_EQ, O_WRONLY, O_WRONLY));
  filter_rule_add(SCMP_ACT_ERRNO(EACCES), SCMP_SYS(open), 1, SCMP_A1(SCMP_CMP_MASKED_EQ, O_RDWR, O_RDWR));
  return seccomp_load(filter);
}

int main(__attribute__((unused)) int argc, char *argv[])
{
  if (argc <= 1) {
    fprintf(stderr, "usage: %s COMMAND [ARG]...\n", argv[0]);
    return 2;
  }

  if (filter_init()) {
    fprintf(stderr, "%s: can't initialize seccomp filter\n", argv[0]);
    return 1;
  }

  execvp(argv[1], &argv[1]);

  if (errno == ENOENT) {
    fprintf(stderr, "%s: command not found: %s\n", argv[0], argv[1]);
    return 127;
  }
  fprintf(stderr, "%s: failed to execute: %s: %s\n", argv[0], argv[1], strerror(errno));
  return 1;
}
Here you can see that it is still possible to read files:
[jojo@kirk:1] Wed, der 06.03.2013 um 12:58 Uhr Keep Smiling :-)
> ls test
ls: cannot access test: No such file or directory
> echo foo > test
bash: test: Permission denied
> ls test
ls: cannot access test: No such file or directory
> touch test
touch: cannot touch 'test': Permission denied
> head -n 1 no-writes.c # reading still works
/*
It does not prevent deleting files, or moving them, or other file operations besides opening, but that could be added. A tool that enables this without having to write C code is syscall_limiter.
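If you also wanted to refuse deleting or renaming, a hedged sketch (untested) of extra rules one might drop into filter_init() above, next to the existing open rules:
/* Also refuse deleting and renaming files (sketch; syscall coverage
 * may need extending, e.g. unlinkat/renameat variants on newer kernels). */
filter_rule_add(SCMP_ACT_ERRNO(EACCES), SCMP_SYS(unlink), 0);
filter_rule_add(SCMP_ACT_ERRNO(EACCES), SCMP_SYS(unlinkat), 0);
filter_rule_add(SCMP_ACT_ERRNO(EACCES), SCMP_SYS(rename), 0);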
How to prevent a process from writing files
1,504,304,251,000
root@macine:~# getcap ./some_bin ./some_bin =ep What does "ep" mean? What are the capabilities of this binary?
# getcap ./some_bin
./some_bin =ep
That binary has ALL the capabilities permitted (p) and effective (e) from the start. In the textual representation of capabilities, a leading = is equivalent to all=. From the cap_to_text(3) manpage:
In the case that the leading operator is =, and no list of capabilities is provided, the action-list is assumed to refer to all capabilities. For example, the following three clauses are equivalent to each other (and indicate a completely empty capability set): all=; =; cap_chown,<every-other-capability>=.
Such a binary can do whatever it pleases, limited only by the capability bounding set, which on a typical desktop system includes everything (otherwise setuid binaries like su wouldn't work as expected). Notice that this is just a "gotcha" of the textual representation used by libcap: in the security.capability extended attribute of the file for which getcap will print /file/path =ep, all the meaningful bits are effectively on; for an empty security.capability, /file/path = (with the = not followed by anything) will be printed instead. If someone is still not convinced, here is a small experiment:
# cp /bin/ping /tmp/ping # will wipe setuid bits and extended attributes
# su user -c '/tmp/ping localhost'
ping: socket: Operation not permitted
# setcap =ep /tmp/ping
# su user -c '/tmp/ping localhost' # will work because of cap_net_raw
PING localhost(localhost (::1)) 56 data bytes
64 bytes from localhost (::1): icmp_seq=1 ttl=64 time=0.073 ms
^C
# setcap = /tmp/ping
# su user -c '/tmp/ping localhost'
ping: socket: Operation not permitted
Notice that an empty file capability is also different from a removed capability (setcap -r /file/path): an empty file capability will block the Ambient set from being inherited when the file executes. A subtlety of the =ep file capability is that if the bounding set is not a full one, then the kernel will prevent a program with =ep on it from executing (as described in the "Safety checking for capability-dumb binaries" section of the capabilities(7) manpage).
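If you prefer to inspect this programmatically, a hedged sketch using libcap (link with -lcap; the path is illustrative) that prints the same textual form getcap shows:
#include <stdio.h>
#include <sys/capability.h>

int main(void)
{
    cap_t caps = cap_get_file("/tmp/ping");
    if (caps == NULL) { perror("cap_get_file"); return 1; }

    /* cap_to_text produces the "=ep"-style representation */
    char *txt = cap_to_text(caps, NULL);
    printf("/tmp/ping %s\n", txt);

    cap_free(txt);
    cap_free(caps);
    return 0;
}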
What does the "ep" capability mean?
1,504,304,251,000
If I want to set a capability (capabilities(7)), such as CAP_NET_BIND_SERVICE, on an executable file and that file is a script, do I have to set the capability (setcap(8)) on the interpreter starting that script or is it sufficient to set it on the script file itself? Note: the question concerns Scientific Linux 6.1 in particular, but I think it can be answered generically.
Setting the capability on the script will not be effective. It's a similar situation to the setuid bit not working on scripts: as in that case, it comes down to how execve handles the shebang, and the security reasoning behind it (for details see: Allow setuid on shell scripts). I think you have these options:
- set the capabilities on the interpreter itself (actually rather on a copy of it); the problem here is that anybody who is able to execute it will run with those elevated capabilities (and will be able to execute some arbitrary script or start it interactively)
- write a wrapper executable which has hardcoded logic to execute your script (a sketch follows below), and set the desired capabilities on this executable; make sure that nobody is able to modify, remove or replace the script (still, by using chroot one might misuse such a wrapper)
In both cases you would have to make sure the capabilities set will survive execve, by setting the inheritable flag. You might also use pam_cap (usually distributed with libcap) to activate the desired capabilities, by configuration, only for selected users. And in general you want to make sure nobody is able to modify the behavior of your interpreter by changing the environment, e.g. PYTHONPATH or something similar.
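A hedged sketch of such a wrapper (the script path is illustrative; the capability still has to be set on this binary and made to survive execve, as discussed above):
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Hardcoded target and no forwarded arguments, so callers cannot
     * substitute their own script or interpreter options. */
    execl("/usr/local/sbin/myscript.sh", "myscript.sh", (char *)NULL);

    /* execl only returns on failure */
    perror("execl");
    return 1;
}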
Capabilities for a script on Linux
1,504,304,251,000
I'm starting a webserver as non-root using a systemd unit file. I am getting listen tcp :80: bind: permission denied even though I already ran setcap cap_net_bind_service=+ep on the executable. In an example unit file on the internet I found CapabilityBoundingSet=CAP_NET_BIND_SERVICE AmbientCapabilities=CAP_NET_BIND_SERVICE to be used in the unit file. So I tried that out, and suddenly the application can bind port 80. What does that tell me? setcap is old/deprecated/ignored? Only by systemd or by Linux in general?
It's correct to say that in general systemd will not work with file capabilities managed with setcap and will require you to configure them as part of the service unit instead. So it's not like setcap is completely deprecated... (There might be valid uses for it outside of services launched by systemd.) But it doesn't really work for systemd services, at least. In fact, file capabilities (set by setcap) were always dubious and questionable, from the start... They require the use of "inheritable" capabilities, which were somewhat poorly conceived and had many shortcomings... The kernel feature of "ambient" capabilities was introduced to solve many of these issues and it's what newer systems are adopting (systemd included here, as you can see, you're setting AmbientCapabilities= for your service to manage to be able to bind to low ports.) The topic of capabilities is fairly complex... For a perhaps gentler introduction to this issue, you might want to check "Inheriting capabilities" at LWN. For the full gory details (including some algebraic notation on the capability sets), refer to the capabilities(7) man page.
Is setcap deprecated?
1,504,304,251,000
I'm trying to run an app in a docker container.The app requires root privileges to run. sudo docker run --restart always --network host --cap-add NET_ADMIN -d -p 53:53/udp my-image My question is: What are the risks when adding the NET_ADMIN capability together with the --network host option. If an attacker can somehow obtain some code execution from my app, will he have unlimited power since I'm running it as root or will he only have access to the networking part of the kernel? If so, what would be his attack surface (in other words, can he gain root on my host OS with only the NET_ADMIN capability set?)
Q1. Can an attacker gain root on my host OS using only the NET_ADMIN capability? Yes (in some cases). CAP_NET_ADMIN lets you use the SIOCETHTOOL ioctl() on any network device inside the namespace. This includes commands like ETHTOOL_FLASHDEV, i.e. ethtool -f. And that's the game. There is a little more explanation in the quote below. SIOCETHTOOL is allowed inside any network namespace, since commit 5e1fccc0bfac, "net: Allow userns root control of the core of the network stack". Before then, it was only possible for CAP_NET_ADMIN in the "root" network namespace. This is interesting because of the security considerations that were pointed out at the time. I had a look at the code in kernel version 5.0, and I believe the following comments still apply: Re: [PATCH net-next 09/17] net: Allow userns root control of the core of the network stack. For the same reason you had better be very selective about which ethtool commands are allowed based on per-user_ns CAP_NET_ADMIN. Consider for a start: ETHTOOL_SEEPROM => brick the NIC ETHTOOL_FLASHDEV => brick the NIC; own the system if it's not using an IOMMU These are prevented by not having access to real hardware by default. A physical network interface must be moved into a network namespace for you to have access to it. Yes, I realise that. The question is whether you would expect anything in a container to be able to do those things, even with a physical net device assigned to it. Actually we have the same issue without considering containers - should CAP_NET_ADMIN really give you low-level control over hardware just because it's networking hardware? I think some of these ethtool operations, and access to non-standard MDIO registers, should perhaps require an additional capability (CAP_SYS_ADMIN or CAP_SYS_RAWIO?). I guess the lockdown feature has a similar issue. I didn't notice the lockdown patches in the results while I was searching. I suppose the solution for the lockdown feature would be some kind of digital signature, similar to how lockdown only allows signed kernel modules. Q2. If they somehow obtain some code execution from my app, will they have unlimited power? I'm splitting this out as a narrower case, specific to your command - sudo docker run --restart always --network host --cap-add NET_ADMIN -d -p 53:53/udp my-image As well as capabilities, the docker command should also impose seccomp restrictions. It might also impose LSM-based restrictions, if they are available on your system (SELinux or AppArmor). However, neither of these seem to apply to SIOCETHTOOL: I think seccomp-bpf could be used to block SIOCETHTOOL. However the default seccomp configuration for docker does not try to filter any ioctl() calls. And I did not notice any LSM hooks in the kernel functions I looked at. I think Ben Hutchings made a good point; the ideal solution would be to restrict this to CAP_SYS_RAWIO. But if you change something like that and too many people "notice" - i.e. it breaks their setup - then you get Angry Linus shouting at you :-P. (Especially if you're working on this because of "secure boot"). Then the change gets reverted, and you get to work out what the least ugly hack is. I.e. the kernel might be forced to maintain backwards compatibility, and allow processes which have CAP_NET_ADMIN in the root namespace. In that case, you would still need seccomp-bpf to protect your docker command. I am not sure that it would be worth trying to change the kernel in this case, as it would only protect (some) containers. 
And maybe container runtimes like docker could be fixed to block SIOCETHTOOL by default. That might be a workable default for "OS containers" like LXC / systemd-nspawn as well.
Docker running an app with NET_ADMIN capability: involved risks
1,504,304,251,000
I am trying to clear my filesystem cache from inside a docker container, like so: docker run --rm ubuntu:vivid sh -c "/bin/echo 3 > /proc/sys/vm/drop_caches" If I run this command I get sh: 1: cannot create /proc/sys/vm/drop_caches: Read-only file system which is expected, as I cannot write to /proc from inside the container. Now when I call docker run --rm --privileged ubuntu:vivid sh -c "/bin/echo 3 > /proc/sys/vm/drop_caches" it works, which also makes sense to me, as the --privileged container can do (almost) anything on the host. My question is: how do I find out, which Linux capability I need to set in the command docker run --rm --cap-add=??? ubuntu:vivid sh -c "/bin/echo 3 > /proc/sys/vm/drop_caches" in order to make this work without having to set --privileged?
The proc filesystem doesn't support capabilities, ACL, or even changing basic permissions with chmod. Unix permissions determine whether the calling process gets access. Thus only root can write that file. With user namespaces, that's the global root (the one in the original namespace); root in a container doesn't get to change sysctl settings. As far as I know, the only solution to change a sysctl setting from inside a non-privileged namespace is to arrange a communication channel with the outside (e.g. a socket or pipe), and have the listening process run as root outside the container.
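As a hedged sketch of that idea (the FIFO path is illustrative and would have to be created with mkfifo and shared into the container, e.g. via a bind mount): a helper running as root on the host blocks reading the FIFO and performs the write on the container's behalf:
#include <stdio.h>

int main(void)
{
    /* Root helper on the host: each byte written into the FIFO by the
     * container triggers one cache drop. */
    FILE *req = fopen("/run/drop-caches.fifo", "r");
    if (!req) { perror("fopen fifo"); return 1; }

    int c;
    while ((c = fgetc(req)) != EOF) {
        FILE *dc = fopen("/proc/sys/vm/drop_caches", "w");
        if (!dc) { perror("fopen drop_caches"); return 1; }
        fputs("3\n", dc);
        fclose(dc);
    }
    fclose(req);
    return 0;
}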
Which Linux capability do I need in order to write to /proc/sys/vm/drop_caches?
1,504,304,251,000
I'm trying to understand how Linux capabilities are passed to a process that has been exec()'d by another one. From what I've read, in order for a capability to be kept after exec, it must be in the inheritable set. What I am not sure of, though, is how that set gets populated. My goal is to be able to run a program as a regular user that would normally require root. The capability it needs is cap_dac_override so it can read a private file. I do not want to give it any other capabilities. Here's my wrapper: #include <unistd.h> int main(int argc, char *argv[]) { return execl("/usr/bin/net", "net", "ads", "dns", "register", "-P", NULL); } This works when I set the setuid permission on the resulting executable: ~ $ sudo chown root: ./registerdns ~ $ sudo chmod u+s ./registerdns ~ $ ./registerdns Successfully registered hostname with DNS I would like to use capabilities instead of setuid, though. I've tried setting the cap_dac_override capability on the wrapper: ~ $ sudo setcap cap_dac_override=eip ./registerdns ~ $ ./registerdns Failed to open /var/lib/samba/private/secrets.tdb ERROR: Unable to open secrets database I've also tried setting the inheritable flag on the cap_dac_override capability for the net executable itself: ~ $ sudo setcap cap_dac_override=eip ./registerdns ~ $ sudo setcap cap_dac_override=i /usr/bin/net ~ $ ./registerdns Failed to open /var/lib/samba/private/secrets.tdb ERROR: Unable to open secrets database I need to use the wrapper to ensure that the capability is only available when using that exact set of arguments; the net program does several other things that could be dangerous to give users too broad of permissions on it. I'm obviously misunderstanding how the inheritance works. I can't seem to figure out how to set up the wrapper to pass its capabilities along to the replacement process so it can use them. I've read the man page, and countless other documents on how it should work, and I thought I was doing what it describes.
It turns out that setting +i on the wrapper does not add the capability to the CAP_INHERITABLE set for the wrapper process, thus it is not passed through exec. I therefore had to manually add CAP_DAC_OVERRIDE to CAP_INHERITABLE before calling execl:
#include <sys/capability.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    cap_t caps = cap_get_proc();
    printf("Capabilities: %s\n", cap_to_text(caps, NULL));

    cap_value_t newcaps[1] = { CAP_DAC_OVERRIDE, };
    cap_set_flag(caps, CAP_INHERITABLE, 1, newcaps, CAP_SET);
    cap_set_proc(caps);
    printf("Capabilities: %s\n", cap_to_text(caps, NULL));
    cap_free(caps);

    return execl("/usr/bin/net", "net", "ads", "dns", "register", "-P", NULL);
}
In addition, I had to add cap_dac_override to the permitted file capabilities set on /usr/bin/net and set the effective bit:
~ $ sudo setcap cap_dac_override=p ./registerdns
~ $ sudo setcap cap_dac_override=ei /usr/bin/net
~ $ ./registerdns
Capabilities = cap_dac_override+p
Capabilities = cap_dac_override+ip
Successfully registered hostname with DNS
I think I now fully understand what's happening:
- The wrapper needs CAP_DAC_OVERRIDE in its permitted set so it can add it to its inheritable set.
- The wrapper's process inheritable set is different from its file inheritable set, so setting +i on the file is useless; the wrapper must explicitly add CAP_DAC_OVERRIDE to CAP_INHERITABLE using cap_set_flag/cap_set_proc.
- The net file needs to have CAP_DAC_OVERRIDE in its inheritable set so that it can in fact inherit the capability from the wrapper into its CAP_PERMITTED set. It also needs the effective bit to be set so that it will be automatically promoted to CAP_EFFECTIVE.
Passing capabilities through exec
1,504,304,251,000
I have a service that I start using systemd. The service user and group are changed to a non-privileged user. [Service] ... User=regular_user Group=regular_user ... At some point the service needs to start another process, which is expected to become root. That other process has its 's' bit set and it uses setuid() to become root. The process works just fine if I start it. However, somehow, when the service tries to start it, the setuid() function returns with an error: Operation not permitted. I've seen some options about capabilities, but I have no clue whether those are what needs to be used to keep the setuid() capability working in my service. I tried a few things and none helped so far. For example, I tried that: AmbientCapabilities=CAP_SETGID CAP_SETUID SecureBits=keep-caps And although the process does not seem to generate an error anymore, it still does not do what it is supposed to do (again, if I run that process in my console, it works just fine!)
I actually found the NoNewPrivileges= option, which controls whether my process's children may use setuid(). From what the documentation says, it is certainly not an option one should lightly choose to use. However, the default is: do not allow the setuid() feature (what they mean by «elevate privileges»). What worked for me was to do this:
NoNewPrivileges=false
Note that the documentation does not clearly say that the default for this option is true on Ubuntu 16.04. This may vary depending on the OS.
How do I run a process that wants to become root from a systemd service which is a regular user?
1,504,304,251,000
Thinking about a future web server setup, it struck me that for some reason web servers usually start as root and then drop certain rights (setuid) for the worker processes. In addition there is often chroot involved, which isn't exactly meant as a security measure. What I was wondering, why can web servers (I have administrated everything from Apache, lighttpd to nginx) not use the capability system (capabilities(7)), such as CAP_NET_BIND_SERVICE, on Linux and simply start as non-root user? ... this way still listening on a privileged port below 1024. Or better, I think most of them could, but why isn't that common practice? Why not ... use setcap(8) with CAP_NET_BIND_SERVICE on the binary being run? set up the log folders to allow the (non-root) user to write there ..., if you felt like chroot helps at all, use chroot or lxc to "jail" the web server? There is nothing other than (worker) child process may kill parent that I could come up with that would make this less beneficial than starting outright as root. So why are they traditionally being started as root when afterwards everything is done to get rid of implied security issues that come with it?
Although POSIX has a standard for capabilities which I think includes CAP_NET_BIND_SERVICE, these are not required for conformance and may in some ways be incompatible with the implementation on, e.g., linux. Since webservers like apache are not written for only one platform, using root privileges is the most portable method. I suppose it could do this specifically on linux and BSD (or wherever support is detected), but this would mean the behaviour would vary from platform to platform, etc. It seems to me you could configure your system so that any web server could be used this way; there are some (perhaps clumsy) suggestions about this WRT apache here: NonRootPortBinding. So why are they traditionally being started as root when afterwards everything is done to get rid of implied security issues that come with it? They're started as root because they usually need to access a privileged port, and traditionally this was the only way to do it. The reason they downgrade afterward is because they do not need privileges subsequently, and to limit the damage potential introduced by the myriad of third party add-on software commonly used by the server. This is not unreasonable, since the privileged activity is very limited, and by convention many other system daemons run root continuously, including other inet daemons (e.g., sshd). Keep in mind that if the server were packaged so that it could be run as an unprivileged user with CAP_NET_BIND_SERVICE, this would allow any non-privileged user to start HTTP(S) service, which is perhaps a greater risk.
Why are web servers traditionally started as superuser?
1,504,304,251,000
I'm trying to write a tun/tap program in Rust. Since I don't want it to run as root I've added CAP_NET_ADMIN to the binary's capabilities: $sudo setcap cap_net_admin=eip target/release/tunnel $getcap target/release/tunnel target/release/tunnel = cap_net_admin+eip However, this is not working. Everything I've read says that this is the only capability required to create tuns, but the program gets an EPERM on the ioctl. In strace, I see this error: openat(AT_FDCWD, "/dev/net/tun", O_RDWR|O_CLOEXEC) = 3 fcntl(3, F_GETFD) = 0x1 (flags FD_CLOEXEC) ioctl(3, TUNSETIFF, 0x7ffcdac7c7c0) = -1 EPERM (Operation not permitted) I've verified that the binary runs successfully with full root permissions, but I don't want this to require sudo to run. Why is CAP_NET_ADMIN not sufficient here? For reference, I'm on Linux version 4.15.0-45 there are only a few ways I see that this ioctl can return EPERM in the kernel (https://elixir.bootlin.com/linux/v4.15/source/drivers/net/tun.c#L2194) and at least one of them seems to be satisfied. I'm not sure how to probe the others: if (!capable(CAP_NET_ADMIN)) return -EPERM; ... if (tun_not_capable(tun)) return -EPERM; ... if (!ns_capable(net->user_ns, CAP_NET_ADMIN)) return -EPERM;
I experienced the same issue when writing a Rust program that spawns a tunctl process for creating and managing TUN/TAP interfaces. For instance: let tunctl_status = Command::new("tunctl") .args(&["-u", "user", "-t", "tap0"]) .stdout(Stdio::null()) .status()?; failed with: $ ./target/debug/nio TUNSETIFF: Operation not permitted tunctl failed to create tap network device. even though the NET_ADMIN file capability was set: $ sudo setcap cap_net_admin=+ep ./target/debug/nio $ getcap ./target/debug/nio ./target/debug/nio cap_net_admin=ep The manual states: Because inheritable capabilities are not generally preserved across execve(2) when running as a non-root user, applications that wish to run helper programs with elevated capabilities should consider using ambient capabilities, described below. To cover the case of execve() system calls, I used ambient capabilities. Ambient (since Linux 4.3) This is a set of capabilities that are preserved across an execve(2) of a program that is not privileged. The ambient capability set obeys the invariant that no capability can ever be ambient if it is not both permitted and inheritable. Example solution: For convenience, I use the caps-rs library. // Check if `NET_ADMIN` is in permitted set. let perm_net_admin = caps::has_cap(None, CapSet::Permitted, Capability::CAP_NET_ADMIN); match perm_net_admin { Ok(is_in_perm) => { if !is_in_perm { eprintln!("Error: The capability 'NET_ADMIN' is not in the permitted set!"); std::process::exit(1) } } Err(e) => { eprintln!("Error: {:?}", e); std::process::exit(1) } } // Note: The ambient capability set obeys the invariant that no capability can ever be ambient if it is not both permitted and inheritable. caps::raise( None, caps::CapSet::Inheritable, caps::Capability::CAP_NET_ADMIN, ) .unwrap_or_else(fail_due_to_caps_err); caps::raise(None, caps::CapSet::Ambient, caps::Capability::CAP_NET_ADMIN) .unwrap_or_else(fail_due_to_caps_err); Finally, setting the NET_ADMIN file capability suffices: $ sudo setcap cap_net_admin=+ep ./target/debug/nio
Why is CAP_NET_ADMIN insufficient permissions for ioctl(TUNSETIFF)?
1,504,304,251,000
I learned from here that there are two ways to control privileged activities: setuid and capabilities. But when I'm playing around with ping on my machine, it seems that it can bypass both mechanisms. First, confirm that on my machine /usr/bin/ping has the cap_net_raw capability and that it uses SOCK_RAW:
$ ll /usr/bin/ping
-rwxr-xr-x 1 root root 72K Jan 31 2020 /usr/bin/ping
$ getcap /usr/bin/ping
/usr/bin/ping = cap_net_raw+ep
$ strace -e socket ping <some-ip>
socket(AF_NETLINK, SOCK_RAW|SOCK_CLOEXEC, NETLINK_ROUTE) = 5
Copying the binary will drop the capability but it still works:
$ cp /usr/bin/ping ~
$ ll ~/ping
-rwxr-xr-x 1 user user 72K Nov 4 16:54 /home/user/ping
$ getcap ~/ping
[empty result]
$ ~/ping <some-ip>
[it works]
I'm using Ubuntu 20.04 and 5.4.0-52-generic.
On a recent Linux system, ping doesn't need any privileges for its most basic operation, which is to send ICMP echo request messages and receive responding echo reply messages. Ubuntu 20.04 has two implementations of ping. The default one, from iputils-ping, is installed setcap CAP_NET_RAW but works for ICMP echo without privileges. The one from inetutils-ping is installed setuid root but also works for ICMP echo without privileges. Both use an ICMP socket, which is permitted without privileges: socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP) = 3 I can't reproduce the use of a netlink socket for a basic ping on either of these implementations, with or without privileges.
Why ping works without capability and setuid [duplicate]
1,504,304,251,000
Since RPM 4.7, there has been the ability to specify that a file in an RPM package should be installed with capabilities set (via %caps). Is there a similar feature for Debian packages?
Sadly, no. There isn't a way to make dpkg use file capabilities, and apparently nobody has ever asked, though the library itself is available. I skimmed through the Debian Policy Manual, and there isn't a single entry that references this feature. That said, you can use an override_dh_install target (if you use debhelper), pre/post maintainer scripts, or modify the debian/rules file to reproduce this behavior, but I don't see any obviously easy way to implement it.
Can capabilities be specified in Debian packages?
1,504,304,251,000
When I modify a file, the file capabilities I had set earlier are lost. Is this the expected behavior? I first set a file capability: $ setcap CAP_NET_RAW+ep ./test.txt $ getcap ./test.txt ./test.txt = cap_net_raw+ep As expected I found the file capability is set. Then I modify the file. $ echo hello >> ./test.txt Now when I check the file capabilities, no capabilities are found. $ getcap ./test.txt
Yes, it is expected behaviour. I don't have a document that says so, but you can see in this patch from 2007:
When a file with posix capabilities is overwritten, the file capabilities, like a setuid bit, should be removed. This patch introduces security_inode_killpriv(). This is currently only defined for capability, and is called when an inode is changed to inform the security module that it may want to clear out any privilege attached to that inode. The capability module checks whether any file capabilities are defined for the inode, and, if so, clears them.
security_inode_killpriv is still in the kernel today, being called from notify_change when an inode is changed in "response to write or truncate": see dentry_needs_remove_privs
/* Return mask of changes for notify_change() that need to be done as a
 * response to write or truncate... */
int dentry_needs_remove_privs(struct dentry *dentry)
Linux File Capabilities are lost when I modify the file. Is this expected behavior?
1,504,304,251,000
Inspired by this question here is the follow-up: As some of you may know setuid-binaries are dangerous, since some exploits use these to escalate their rights up to root. Now it seems that there has been an interesting idea to replace setuid with different, more secure means. How?
File system capabilities in Linux were added to allow more fine-grained control than setuid alone will allow. With setuid it's a full escalation of effective privileges to the user (typically root). The capabilities(7) manpage provides the following description:
For the purpose of performing permission checks, traditional Unix implementations distinguish two categories of processes: privileged processes (whose effective user ID is 0, referred to as superuser or root), and unprivileged processes (whose effective UID is nonzero). Privileged processes bypass all kernel permission checks, while unprivileged processes are subject to full permission checking based on the process's credentials (usually: effective UID, effective GID, and supplementary group list).
Starting with kernel 2.2, Linux divides the privileges traditionally associated with superuser into distinct units, known as capabilities, which can be independently enabled and disabled. Capabilities are a per-thread attribute.
If an application needs the ability to call chroot(), which is typically only allowed for root, CAP_SYS_CHROOT can be set on the binary rather than setuid. This can be done using the setcap command:
setcap cap_sys_chroot+ep /bin/mybin
As of RPM version 4.7.0, capabilities can be set on packaged files using %caps. Fedora 15 had a release goal of removing all setuid binaries tracked in this bug report. According to the bug report, this goal was accomplished. The wikipedia article on Capability-based security is a good read for anyone interested.
How to replace setuid with file-system capabilities
1,504,304,251,000
Generally speaking, a unix (or specifically Linux) program can't do something like using ICMP_ECHO ("ping") to check the accessibility of a router unless either run by the superuser or setuid root or blessed with the appropriate POSIX capability. Obviously, on any competently-run system applying either setuid or a POSIX capability to a binary requires superuser intervention. If a development environment has been blessed with the CAP_SETFCAP capability, then it should be able to set appropriate POSIX capabilities on programs it builds, at least as far as local operation is concerned. With a nod to Ken Thompson's Reflections on Trusting Trust paper and assuming static linkage of all libraries it should, in principle, be possible to build a fingerprint into every program source module, to propagate that to object and binary files, and hence to provide an audit trail that demonstrates that a particular binary has been built from a particular collection of sources. As such, an administrator asked to bless a newly-built copy of the IDE should be able to satisfy herself that the IDE will only be able to set capabilities in programs it generates itself, and hasn't been modified by a malicious user so that he can use it as his personal copy of setcap by means of e.g. an undocumented startup option. The problem here is that most mature development environments (e.g. the Lazarus IDE) can build themselves, and as such if the local administrator blessed a provably-clean copy with CAP_SETFCAP a malicious user could rebuild it to include malicious code and apply CAP_SETFCAP to it himself, breaking the local system security. Is it possible to apply the POSIX CAP_SETFCAP capability to a binary, in such a way that the one thing it can't propagate to a newly-built program is another CAP_SETFCAP or one of its superset capabilities?
If you want to reserve the right to use setcap to just sudo-enabled users, then there is no need to add any capability to it. Just do that. If you have some notion of trusted users, you have two options for using capabilities. I've also included a third option if you just want to act like you have a capability, and perhaps package files etc.
1. Make a local copy of setcap, restrict its use to a common group (builders) of users and give it a permitted capability. This trusts user members of that group to behave well:
$ cp $(which setcap) ./builder-setcap
$ sudo chgrp builders ./builder-setcap
$ chmod 0750 ./builder-setcap
$ sudo ./builder-setcap cap_setfcap=p ./builder-setcap
2. Use an Inheritable file capability to limit who can cause the builder-setcap to run with privilege. This will require another step to actually obtain that privilege at run time (some way for the running user to pre-obtain a process Inheritable capability, such as capsh or pam_cap.so). That mechanism might be a wrapper for the build system. With capsh it is something like this:
$ cp $(which setcap) ./builder-setcap
$ sudo chgrp builders ./builder-setcap
$ chmod 0750 ./builder-setcap
$ sudo ./builder-setcap cap_setfcap=i ./builder-setcap
$ sudo capsh --user=$(whoami) --inh=cap_setfcap --
... $ # in this shell cap_setfcap is available to ./builder-setcap
3. The third mechanism is to use a user namespace container. In such a container, there is a fake notion of privilege and a fake root user as well. In that environment, the unprivileged user is transformed into root for this container, and can simply invoke setcap to grant it fake capabilities:
$ unshare -Ur
... $ id
uid=0(root) gid=0(root) groups=0(root),65534(nobody)
... $ cp $(which setcap) builder-setcap
... $ ./builder-setcap cap_setfcap=p ./builder-setcap
In this last case, that file capability works while you are inside the container, but not once you exit it. The capability-enabled file is not really capable:
... $ exit
$ getcap -n ./builder-setcap
./builder-setcap cap_setfcap=p [rootid=1000]
The -n argument here reveals the user namespace root identity. You can see that back outside the namespace, the file doesn't really have a file capability as follows:
$ ./builder-setcap -r builder-setcap
unable to set CAP_SETFCAP effective capability: Operation not permitted
I think the most viable approach for what you are trying to do is to write yourself a build invocation wrapper that does the equivalent of sudo capsh --user=$(whoami) --inh=cap_setfcap -- -c 'exec builder', and prepare the builder-setcap binary as per method 2. Also, of course, you can write your own version of builder-setcap that filters down the file capabilities you are willing to bestow, if you are trying to grant some but not others. The total code complexity of setcap is pretty minimal.
Preventing POSIX capabilities proliferation
1,504,304,251,000
While investigating sharing the PID namespace with containers, I noticed something interesting that I don't understand. When a container shares the PID namespace with the host, some processes have their environmental variables protected while others do not. Let's take, for example, mysql. I'll start a container with an env variable set:
ubuntu@sandbox:~$ docker container run -it -d --env MYSQL_ROOT_PASSWORD=SuperSecret mysql
551b309513926caa9d5eab5748dbee2f562311241f72c4ed5d193c81148729a6
I'll start another container which shares the host PID namespace and try to access the environ file:
ubuntu@sandbox:~$ docker container run -it --rm --pid host ubuntu /bin/bash
root@1c670d9d7138:/# ps aux | grep mysql
999 18212 5.0 9.6 2006556 386428 pts/0 Ssl+ 17:55 0:00 mysqld
root 18573 0.0 0.0 2884 1288 pts/0 R+ 17:55 0:00 grep --color=auto mysql
root@1c670d9d7138:/# cat /proc/18212/environ
cat: /proc/18212/environ: Permission denied
Something is blocking my access to read the environmental variables. I was able to find out that I need CAP_SYS_PTRACE to read it in a container:
ubuntu@sandbox:~$ docker container run -it --rm --pid host --cap-add SYS_PTRACE ubuntu /bin/bash
root@079d4c1d66d8:/# cat /proc/18212/environ
MYSQL_PASSWORD=HOSTNAME=551b30951392MYSQL_DATABASE=MYSQL_ROOT_PASSWORD=SuperSecretPWD=/HOME=/var/lib/mysqlMYSQL_MAJOR=8.0GOSU_VERSION=1.14MYSQL_USER=MYSQL_VERSION=8.0.30-1.el8TERM=xtermSHLVL=0MYSQL_ROOT_HOST=%PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binMYSQL_SHELL_VERSION=8.0.30-1.el8
However, not all processes are protected in this way. For example, I'll start another ubuntu container with an env variable set and run the tail command.
ubuntu@sandbox:~$ docker container run --rm --env SUPERSECRET=helloworld -d ubuntu tail -f /dev/null
42023615a4415cd4064392e890622530adee1f42a8a2c9027f4921a522d5e1f2
Now when I run the container with the shared pid namespace, I can access the environmental variables.
ubuntu@sandbox:~$ docker container run -it --rm --pid host ubuntu /bin/bash
root@3a774156a364:/# ps aux | grep tail
root 19056 0.0 0.0 2236 804 ? Ss 17:57 0:00 tail -f /dev/null
root 19176 0.0 0.0 2884 1284 pts/0 S+ 17:58 0:00 grep --color=auto tail
root@3a774156a364:/# cat /proc/19056/environ
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=42023615a441SUPERSECRET=helloworldHOME=/root
What mechanism is preventing me from reading the mysqld environmental variables and not the tail -f process?
What mechanism is preventing me from reading the mysqld environmental variables and not the tail -f process? The fact that you're running with a different user ID in the first case. If we start up your two examples: docker run --name mysql -it -d --env MYSQL_ROOT_PASSWORD=SuperSecret mysql:latest docker run --name tail -it -d --env MYSQL_ROOT_PASSWORD=SuperSecret ubuntu:latest tail -f /dev/null And then look at the resulting processes: $ ps -fe n |grep -E 'tail|mysqld' | grep -v grep 999 422026 422005 2 22:50 pts/0 Ssl+ 0:00 mysqld 0 422170 422144 0 22:50 pts/0 Ss+ 0:00 tail -f /dev/null We see that mysqld is running as UID 999, while the tail command is running as UID 0. When we start up a new container in the host pid namespace, we can only read the environ for processes that are owned by the same UID and GID. So this works, because by default a container runs with UID 0: $ docker run --rm --pid host ubuntu:latest cat /proc/422170/environ | tr '\0' '\n' PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=e89c069d4674 TERM=xterm MYSQL_ROOT_PASSWORD=SuperSecret HOME=/root And this fails: $ docker run --rm --pid host ubuntu:latest cat /proc/422026/environ | tr '\0' '\n' cat: /proc/422026/environ: Permission denied We can only read the environ file for a process running under a different UID or GID if we have the CAP_SYS_PTRACE capability. The logic for this check is in the ptrace_may_access function in the kernel: if (uid_eq(caller_uid, tcred->euid) && uid_eq(caller_uid, tcred->suid) && uid_eq(caller_uid, tcred->uid) && gid_eq(caller_gid, tcred->egid) && gid_eq(caller_gid, tcred->sgid) && gid_eq(caller_gid, tcred->gid)) goto ok; if (ptrace_has_cap(tcred->user_ns, mode)) goto ok; We can make that failing example work by having the container run with the same UID and GID as the mysql process: $ docker run -u 999:999 --rm --pid host ubuntu:latest cat /proc/422026/environ | tr '\0' '\n' MYSQL_PASSWORD= HOSTNAME=bde980104dcd MYSQL_DATABASE= MYSQL_ROOT_PASSWORD=SuperSecret PWD=/ HOME=/var/lib/mysql MYSQL_MAJOR=8.0 GOSU_VERSION=1.14 MYSQL_USER= MYSQL_VERSION=8.0.31-1.el8 TERM=xterm SHLVL=0 MYSQL_ROOT_HOST=% PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin MYSQL_SHELL_VERSION=8.0.31-1.el8
What mechanism prevents me from reading /proc/<PID>/environ in containers with a PID namespace shared with the host?
1,504,304,251,000
I know that if a process is run setuid, it's protected against various things that could subvert the process, like LD_PRELOAD and ptrace (debugging). But I haven't been able to find anything on the same being done for capabilities. I assume the same sorts of things are done with capabilities, since otherwise there would be huge security holes, but I haven't been able to find it documented/verified.
As mentioned in this Kernel Mailing List message, whether a process needs extra security is checked in cap_bprm_secureexec() of the kernel file security/commoncap.c, which does check for capabilities. This is then exported to the process via the auxiliary vector. This can be accessed/tested via getauxval(AT_SECURE). I inserted getauxval(AT_SECURE) into a test program, and it did indeed return 1 when it was running with any capabilities set and usable, the same as it would if running setuid, so capabilities have the same security protections as setuid.
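A minimal version of such a test program might look like this sketch:
#include <stdio.h>
#include <sys/auxv.h>

int main(void)
{
    /* Nonzero when the kernel flagged this execution as "secure",
     * i.e. setuid/setgid or file capabilities were in effect. */
    unsigned long secure = getauxval(AT_SECURE);
    printf("AT_SECURE = %lu (%s)\n", secure,
           secure ? "LD_PRELOAD and friends are ignored"
                  : "normal execution");
    return 0;
}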
Security of capabilities vs setuid (LD_PRELOAD, etc)
1,504,304,251,000
The "effective user/group ID" of a process is what the OS uses to determine whether an action (such as opening a file) is permitted by the process. You can set the effective primary GID of the current process using setegid, which can only be used by superusers (or processes given the capability) to lower privileges temporarily. Supplementary GIDs are additional groups that are also used to check whether an action is permitted by a process. For example, if a file is located under a directory structure /A/B/C/file.txt, and directories A, B, and C have read access locked to their owning groups groupA, groupB, groupC respectively, a process would need all 3 groups among its supplementary groups or effective GID. There is a setgroups syscall which is analogous to setgid, meaning it changes the environment of the process permanently. Is there no need for an "effective" version of setgroups (i.e. setegroups)?
Such a system call doesn't exist because the supplementary groups can be considered to be themselves the effective groups. The difference between real and effective UID and GIDs exists to allow processes to drop privileges, but also to allow users to raise the privileges with which some processes are called (via the setuid/setgid filesystem bits). In both cases we want to keep track of the real UID and GID of the user behind the process with the raised/lowered privileges (effective UID and GID). There is no need for that difference for supplementary groups because those can easily be recovered from the groups file. Note that, when raising or dropping privileges, an application would typically call initgroups to reset the groups to match the effective uid and gid of the new user (thus losing access to any other supplementary groups that could previously be in place). From another source: "The only use of setgroups is usually from the initgroups function, which reads the entire group file with the functions getgrent, setgrent, and endgrent, which we described earlier and determines the group membership for username. It then calls setgroups to initialize the supplementary group ID list for the user. One must be superuser to call initgroups, since it calls setgroups."
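A hedged sketch of that typical privilege-drop sequence (the username is illustrative; this must start as root):
#include <stdio.h>
#include <unistd.h>
#include <grp.h>
#include <pwd.h>

int main(void)
{
    struct passwd *pw = getpwnam("daemonuser");
    if (!pw) { fprintf(stderr, "no such user\n"); return 1; }

    /* Order matters: supplementary groups first (while still privileged),
     * then gid, then uid (after which we cannot go back). */
    if (initgroups(pw->pw_name, pw->pw_gid) != 0) { perror("initgroups"); return 1; }
    if (setgid(pw->pw_gid) != 0) { perror("setgid"); return 1; }
    if (setuid(pw->pw_uid) != 0) { perror("setuid"); return 1; }

    printf("now running as uid %d\n", (int)getuid());
    return 0;
}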
Why is there no "set effective supplementary GIDs" syscall?
1,504,304,251,000
In Linux, a process run by a non-root user can have some capabilities assigned to it to increase its privileges. And a process that's run by the root user has all of the capabilities available, but can such a process have some of its capabilities removed, either manually or automatically in certain situations?
Yes, the idea of capabilities is that the user id itself doesn't give any special abilities. An UID 0 process can also drop unneeded capabilities. It would still retain access to files owned by UID 0 (e.g. /etc/shadow or /etc/ssh/sshd_config), so switching to another UID would still likely be a smart thing to do in addition. We can test this with capsh, it allows us to drop capabilities as requested. Here, the last part is run as a shell script, and we can see that the chown fails since the ability to change file owners (CAP_CHOWN) was dropped: # capsh --drop=cap_chown -- -c 'id; touch foo; chown nobody foo' uid=0(root) gid=0(root) groups=0(root) chown: changing ownership of 'foo': Operation not permitted The capabilities(7) man page mentions that the system has some safeguards in place for setuid binaries that don't know about capabilities and might not deal well with a situation where some are permanently removed. See under "Safety checking for capability-dumb binaries". The same man page of course contains other useful information on capabilities, too.
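Programmatically, the same demonstration might look like this hedged sketch using libcap (link with -lcap; run it as root):
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/capability.h>

int main(void)
{
    /* Drop CAP_CHOWN from our own effective and permitted sets;
     * dropping from the permitted set is irreversible. */
    cap_t caps = cap_get_proc();
    cap_value_t v[1] = { CAP_CHOWN };
    cap_set_flag(caps, CAP_EFFECTIVE, 1, v, CAP_CLEAR);
    cap_set_flag(caps, CAP_PERMITTED, 1, v, CAP_CLEAR);
    if (cap_set_proc(caps) != 0) { perror("cap_set_proc"); return 1; }
    cap_free(caps);

    int fd = open("foo", O_CREAT | O_WRONLY, 0644);
    if (fd >= 0) close(fd);

    /* Still uid 0, but this now fails with EPERM */
    if (chown("foo", 65534, 65534) != 0)
        perror("chown");
    return 0;
}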
Does a process run by root always have all of the capabilities available in Linux?
1,504,304,251,000
docker run --rm --cap-drop=net_bind_service --publish 8080:80 --name nginx nginx
ps --forest -fC nginx
UID PID PPID C STIME TTY TIME CMD
root 449870 449847 0 12:38 ? 00:00:00 nginx: master process nginx -g daemon off;
101 449929 449870 0 12:38 ? 00:00:00 \_ nginx: worker process
101 449930 449870 0 12:38 ? 00:00:00 \_ nginx: worker process
101 449931 449870 0 12:38 ? 00:00:00 \_ nginx: worker process
101 449932 449870 0 12:38 ? 00:00:00 \_ nginx: worker process
101 449933 449870 0 12:38 ? 00:00:00 \_ nginx: worker process
101 449934 449870 0 12:38 ? 00:00:00 \_ nginx: worker process
101 449935 449870 0 12:38 ? 00:00:00 \_ nginx: worker process
101 449936 449870 0 12:38 ? 00:00:00 \_ nginx: worker process
101 449937 449870 0 12:38 ? 00:00:00 \_ nginx: worker process
101 449938 449870 0 12:38 ? 00:00:00 \_ nginx: worker process
101 449939 449870 0 12:38 ? 00:00:00 \_ nginx: worker process
101 449940 449870 0 12:38 ? 00:00:00 \_ nginx: worker process
so the process doesn't have net_bind_service, yet it was able to start and bind to port 80.
getpcaps 449870
449870: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_raw,cap_sys_chroot,cap_mknod,cap_audit_write,cap_setfcap=ep
see this:
docker run --rm --privileged --pid container:nginx --network container:nginx -it --volumes-from nginx --name debug nixery.dev/shell/gnugrep/ps/libcap/htop/lsof/iproute2 bash
bash-5.2# ps aufx
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
0 112 0.0 0.0 4936 4184 pts/0 Ss 19:55 0:00 bash
0 123 0.0 0.0 7240 2324 pts/0 R+ 19:56 0:00 \_ ps aufx
0 1 0.0 0.0 8940 6052 ? Ss 19:38 0:00 nginx: master process nginx -g daemon off;
101 29 0.0 0.0 9328 2564 ? S 19:38 0:00 nginx: worker process
101 30 0.0 0.0 9328 2564 ? S 19:38 0:00 nginx: worker process
101 31 0.0 0.0 9328 2564 ? S 19:38 0:00 nginx: worker process
101 32 0.0 0.0 9328 2564 ? S 19:38 0:00 nginx: worker process
101 33 0.0 0.0 9328 2564 ? S 19:38 0:00 nginx: worker process
101 34 0.0 0.0 9328 2564 ? S 19:38 0:00 nginx: worker process
101 35 0.0 0.0 9328 2564 ? S 19:38 0:00 nginx: worker process
101 36 0.0 0.0 9328 2564 ? S 19:38 0:00 nginx: worker process
101 37 0.0 0.0 9328 2564 ? S 19:38 0:00 nginx: worker process
101 38 0.0 0.0 9328 2564 ? S 19:38 0:00 nginx: worker process
101 39 0.0 0.0 9328 2564 ? S 19:38 0:00 nginx: worker process
101 40 0.0 0.0 9328 2568 ? S 19:38 0:00 nginx: worker process
bash-5.2# ss -ltnp
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 511 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=40,fd=6),("nginx",pid=39,fd=6),("nginx",pid=38,fd=6),("nginx",pid=37,fd=6),("nginx",pid=36,fd=6),("nginx",pid=35,fd=6),("nginx",pid=34,fd=6),("nginx",pid=33,fd=6),("nginx",pid=32,fd=6),("nginx",pid=31,fd=6),("nginx",pid=30,fd=6),("nginx",pid=29,fd=6),("nginx",pid=1,fd=6))
LISTEN 0 511 [::]:80 [::]:* users:(("nginx",pid=40,fd=7),("nginx",pid=39,fd=7),("nginx",pid=38,fd=7),("nginx",pid=37,fd=7),("nginx",pid=36,fd=7),("nginx",pid=35,fd=7),("nginx",pid=34,fd=7),("nginx",pid=33,fd=7),("nginx",pid=32,fd=7),("nginx",pid=31,fd=7),("nginx",pid=30,fd=7),("nginx",pid=29,fd=7),("nginx",pid=1,fd=7))
so why was the container able to start and bind to port 80 even though net_bind_service is not listed in the process caps?
Instead of enabling the privilege to bind to low ports (and other features), Docker lowers the value of the low port below which privilege is needed to 0, thus removing the barrier and allowing any unprivileged process to bind to any port: daemon/oci_linux.go: // allow opening any port less than 1024 without CAP_NET_BIND_SERVICE if sysctlExists("net.ipv4.ip_unprivileged_port_start") { s.Linux.Sysctl["net.ipv4.ip_unprivileged_port_start"] = "0" } unless it's the host's network or the configuration tells otherwise. Likewise a few lines above, ping (which used to require privilege) gets allowed on non-rootless containers.
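You can see the effect of that sysctl with a hedged sketch like this: run by an unprivileged user, the bind only succeeds in a network namespace where ip_unprivileged_port_start has been lowered to 0 (as Docker does inside the container):
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in a = { 0 };
    a.sin_family = AF_INET;
    a.sin_port = htons(80);                 /* a normally privileged port */
    a.sin_addr.s_addr = htonl(INADDR_ANY);

    if (bind(s, (struct sockaddr *)&a, sizeof a) != 0)
        perror("bind");   /* EACCES unless the sysctl allows it */
    else
        puts("bound to port 80 without CAP_NET_BIND_SERVICE");

    close(s);
    return 0;
}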
docker run cap-drop=net_bind_service still has nginx running on the port 80
1,504,304,251,000
This command:
sudo chown -R root:root directory
will remove the SUID bit and reset all capabilities for files. I wonder why it's done silently and why it's not mentioned in the man page. Weirdly, the SGID bit is not removed. And it doesn't matter who the file or directory belonged to prior to running this command. Also, SUID/SGID bits are not removed for directories (though they are useless in this case). Presumably it's done in the name of security, but to my mind it must not be done silently. This gets even worse:
$ setcap cap_sys_rawio,cap_sys_nice=+ep test
$ getcap -v test
test cap_sys_rawio,cap_sys_nice=ep
$ chown -c -v -R 0:0 .
ownership of './test' retained as root:root
ownership of '.' retained as root:root
$ getcap -v test
test
The capabilities of the test file are removed completely silently. It's as if the command is doing a lot more than requested.
The permissions and capability sets aren’t cleared by the chown utility, they’re cleared by the chown system call (on Linux): When the owner or group of an executable file is changed by an unprivileged user, the S_ISUID and S_ISGID mode bits are cleared. POSIX does not specify whether this also should happen when root does the chown(); the Linux behavior depends on the kernel version, and since Linux 2.2.13, root is treated like other users. In case of a non-group-executable file (i.e., one for which the S_IXGRP bit is not set) the S_ISGID bit indicates mandatory locking, and is not cleared by a chown(). When the owner or group of an executable file is changed (by any user), all capability sets for the file are cleared. As alluded to above, this is partially specified by POSIX: Unless chown is invoked by a process with appropriate privileges, the set-user-ID and set-group-ID bits of a regular file shall be cleared upon successful completion; the set-user-ID and set-group-ID bits of other file types may be cleared. If it were to inform the user about this, the chown utility would have to explicitly check for further changes made to files’ metadata when it invokes the chown function. As far as the rationale is concerned, I suspect it’s to reduce the potential for gotchas for the system administrator — chown root:root on Linux can be considered as safe, even if a user prepared a setuid binary ahead of time. The GNU chown man page doesn’t mention this behaviour, but as is often the case with GNU software, the man page documents the utility only partially; its “SEE ALSO” section points to the system call documentation (which is admittedly overkill for most users) and the info page, which does describe this behaviour: The chown command sometimes clears the set-user-ID or set-group-ID permission bits. This behavior depends on the policy and functionality of the underlying chown system call, which may make system-dependent file mode modifications outside the control of the chown command. For example, the chown command might not affect those bits when invoked by a user with appropriate privileges, or when the bits signify some function other than executable permission (e.g., mandatory locking). When in doubt, check the underlying system behavior. (I’m limiting this to Linux based on your tags on the question; since Linux restricts owner changes to privileged processes, there are fewer security implications than on some other Unix-style systems. See explanation on chown(1) POSIX spec for details.)
Why does chown reset/remove the SUID bit and reset capabilities?
1,504,304,251,000
Currently I'm trying to understand capabilities in Linux by reading http://man7.org/linux/man-pages/man7/capabilities.7.html I created a small C++ application with the capability CAP_DAC_READ_SEARCH+eip. The capability works fine for the application, but I have a system() call inside:

    system("cat /dev/mtdX > targetFile");

How can I inherit the capability to this call? Edit: I know that system() is driven by fork() + execl(). The documentation mentions that with fork() the child process gets the same capabilities as the parent process. But why is the read capability not inherited?
Thanks to @mosvy I implemented his solution with libcap and it seems to work as expected.

    #include <cstdio>
    #include <unistd.h>
    #include <sys/capability.h>   /* link with -lcap */
    #include <sys/prctl.h>

    void inheritCapabilities()
    {
        cap_t caps = cap_get_proc();
        if (caps == NULL)
            throw "Failed to load capabilities";
        printf("DEBUG: Loaded Capabilities: %s\n", cap_to_text(caps, NULL));

        /* Raise CAP_DAC_READ_SEARCH in the inheritable set... */
        cap_value_t cap_list[1];
        cap_list[0] = CAP_DAC_READ_SEARCH;
        if (cap_set_flag(caps, CAP_INHERITABLE, 1, cap_list, CAP_SET) == -1)
            throw "Failed to set inheritable";
        if (cap_set_proc(caps) == -1)
            throw "Failed to set proc";
        printf("DEBUG: Capabilities now: %s\n", cap_to_text(cap_get_proc(), NULL));

        /* ...and raise it in the ambient set, so it survives execve() of an
           unprivileged binary. This requires the capability to already be in
           both the permitted and inheritable sets. */
        if (prctl(PR_CAP_AMBIENT, PR_CAP_AMBIENT_RAISE, CAP_DAC_READ_SEARCH, 0, 0) == -1)
            throw "Failed to PR_CAP_AMBIENT_RAISE";
    }

    int main()
    {
        inheritCapabilities();

        char *catargv[5];
        catargv[0] = (char *)"cmd";
        catargv[1] = (char *)"arg1";
        catargv[2] = (char *)"arg2";
        catargv[3] = (char *)"arg3";
        catargv[4] = NULL;
        if (execvp(catargv[0], catargv) == -1)
            throw "Failed to exec command";
    }
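For completeness, one way to exercise this (a sketch; demo.cpp and the binary name are placeholders, and the file capability must put CAP_DAC_READ_SEARCH in both the permitted and inheritable sets for the ambient raise to succeed):

    $ g++ -o demo demo.cpp -lcap
    $ sudo setcap cap_dac_read_search=eip ./demo
    $ ./demo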
Capability inheritable for system() call in C/C++
1,504,304,251,000
I am using rootless containers. According to the buildah docs:

    Moreover, pinging from a rootless container does not work because it lacks the CAP_NET_RAW security capability that the ping command requires. If you want to ping from within a rootless container, you can allow users to send ICMP packets using this sysctl command:
    # sysctl -w "net.ipv4.ping_group_range=0 2000000"
    This action would allow any process within these groups to send ping packets.

I ran that sysctl command, and when I checked the capabilities in the container, it reported:

    Current IAB: ... !cap_net_raw ...

So if you set net.ipv4.ping_group_range you do NOT need this capability? How are these two related?
There’s no direct relationship. CAP_NET_RAW is a capability which allows the use of raw and packet sockets, and binding to any address for transparent proxying. ping_group_range is a sysctl defining a group range allowed to open ICMP echo sockets. Both of these can be used to allow ping to send and receive ICMP echo packets, but they’re not a superset or subset of each other.
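To make the distinction concrete, here is a minimal sketch of what ping_group_range actually enables: an unprivileged ICMP echo ("ping") socket, opened with SOCK_DGRAM rather than a raw socket. It assumes the calling process's group falls inside the configured range:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        /* SOCK_DGRAM + IPPROTO_ICMP: the kernel builds and parses the ICMP
           echo header itself, so no CAP_NET_RAW is needed. */
        int fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP);
        if (fd < 0)
            perror("socket"); /* fails if the caller's groups are outside ping_group_range */
        else
            puts("ICMP echo socket opened without CAP_NET_RAW");
        return 0;
    }

A socket(AF_INET, SOCK_RAW, IPPROTO_ICMP) call, by contrast, is what requires CAP_NET_RAW.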
What is the relation between CAP_NET_RAW and net.ipv4.ping_group_range?
1,590,666,593,000
Does the root user bypass capability checking in the kernel, or is the root user subject to capability checking starting with Linux 2.2? May applications check for and deny access to the root user if certain capabilities are dropped from its capability set? By default the root user has a full set of capabilities. The reason I'm asking is the following excerpt from man capabilities:

    Privileged processes bypass all kernel permission checks

However, nothing is said about whether this rule still holds after the Linux 2.2 release.

Extra: Docker removes certain capabilities from the root user while starting a new container. However, Docker doesn't use user namespaces by default, so how are the root user's capabilities restored?

From man capabilities:

    For the purpose of performing permission checks, traditional UNIX implementations distinguish two categories of processes: privileged processes (whose effective user ID is 0, referred to as superuser or root), and unprivileged processes (whose effective UID is nonzero). Privileged processes bypass all kernel permission checks, while unprivileged processes are subject to full permission checking based on the process's credentials (usually: effective UID, effective GID, and supplementary group list). Starting with kernel 2.2, Linux divides the privileges traditionally associated with superuser into distinct units, known as capabilities, which can be independently enabled and disabled. Capabilities are a per-thread attribute.
The root user can be constrained in its set of capabilities. From capabilities(7):

    If the effective user ID is changed from nonzero to 0, then the permitted set is copied to the effective set.

This implies that in the capability model, becoming the root user does not grant all permissions, unlike in the traditional model, where it does. The capability model is used in Linux 2.2 and later.

The bounding set of capabilities for a process is inherited from its parent. When Docker drops capabilities from the bounding set for the thread starting the container, those capabilities are dropped for the container, affecting every process of that container, whether running as the root user or otherwise. The capabilities that are left are inherited by the root user inside the container when it gains user ID 0 (in the given namespace created by clone(2)). The scope of these capabilities is limited by the parameters passed to clone(2), which create new namespaces for various subsystems; cgroups; and any additional security subsystems, such as AppArmor or SELinux.
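You can watch a constrained root fail at something root normally does (a quick demonstration; the file path is arbitrary):

    $ sudo touch /tmp/capdemo
    $ sudo capsh --drop=cap_chown -- -c 'chown nobody /tmp/capdemo'
    chown: changing ownership of '/tmp/capdemo': Operation not permitted

The shell runs with UID 0, yet with CAP_CHOWN dropped from its bounding set (and therefore absent from the permitted and effective sets after the exec), the kernel refuses the operation.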
Does the root user bypass capability checking?
1,590,666,593,000
How do I enable CLONE_NEWUSER in a more fine-grained fashion, compared to just kernel.unprivileged_userns_clone? I want to keep the kernel API attack surface manageable by keeping new and complicated things like non-root CAP_SYS_ADMIN or BPF disabled, but also selectively allow them for some specific programs. For example, chrome-sandbox wants either CLONE_NEWUSER or suid-root for proper operation, but I don't want all programs to be able to use such complicated tricks, only a handful of approved ones.
Without creating a custom kernel patch, this isn't possible. Note that this particular Debian-specific sysctl is deprecated. The way to disable user namespaces is user.max_user_namespaces = 0.

A new user namespace is created by kernel/user_namespace.c:create_user_ns(). There are several checks that occur prior to allowing the creation of a new namespace, but nothing indicates the ability to control this on a per-file or per-user basis. It's unfortunate, but many kernel developers don't understand the risk behind enabling unprivileged user namespaces on a global basis.

A sample (untested!) patch to allow only UID 1234 to create a new namespace in kernel 6.0:

    --- a/kernel/user_namespace.c
    +++ b/kernel/user_namespace.c
    @@ -86,6 +86,10 @@ int create_user_ns(struct cred *new)
         struct ucounts *ucounts;
         int ret, i;
     
    +    ret = -EPERM;
    +    if (!uid_eq(current_uid(), KUIDT_INIT(1234)))
    +        goto fail;
    +
         ret = -ENOSPC;
         if (parent_ns->level > 32)
             goto fail;
How do I enable unprivileged_userns_clone selectively for one executable or user?
1,590,666,593,000
In a program I'm enumerating network namespaces by scanning /proc/pid/ for ns/net (sym) links. This program runs inside the "root" namespaces (original init) of the host itself. Normally, I need to run the scanner part as root, as otherwise I will have only limited access to other processes' /proc/pid/ information. I would like to avoid running the scanner as root if possible, and I would like to avoid the hassle of dropping privileges. Which Linux capability do I need to set for my scanner program so it can be run by non-root users and still see the complete /proc/pid/ tree and read network namespace links?
After some trial and error, I found out that in fact CAP_SYS_PTRACE is needed. In contrast, CAP_DAC_READ_SEARCH and CAP_DAC_OVERRIDE don't give the required access, which includes readlink() and similar operations.

What I'm seeing can be cross-checked: first, ptrace.c gives the necessary clue in __ptrace_may_access():

    /* May we inspect the given task?
     * This check is used both for attaching with ptrace
     * and for allowing access to sensitive information in /proc.
     *
     * ptrace_attach denies several cases that /proc allows
     * because setting up the necessary parent/child relationship
     * or halting the specified task is impossible.
     */

And second, the nsfs-related functions, such as proc_ns_readlink(), (indirectly) call __ptrace_may_access(). And finally, man 7 namespaces mentions:

    The symbolic links in this subdirectory are as follows: [...] Permission to dereference or read (readlink(2)) these symbolic links is governed by a ptrace access mode PTRACE_MODE_READ_FSCREDS check; see ptrace(2).
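So in practice (a sketch; netns-scanner is a placeholder name for your binary):

    $ sudo setcap cap_sys_ptrace=ep ./netns-scanner
    $ ./netns-scanner   # run as a normal, non-root user

With CAP_SYS_PTRACE effective, readlink() on /proc/<pid>/ns/net succeeds for other users' processes, because the PTRACE_MODE_READ_FSCREDS check quoted above passes.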
Access /proc/pid/ns/net without running query process as root?
1,590,666,593,000
I've been inspired to start playing around with Linux capabilities again. My pet project is to replace the setuid bit on a lot of the binaries and provide access to additional privileged utilities to non-root users. I do this by adding the relevant capabilities (+ei; the issue is moot with +ep) via setcap, and configuring my personal user account (jdavis4) to have those capabilities assigned to its session at login via pam_cap.so, and it's been going smashingly. I can give individual users access to "ping" and "kill" via capability.conf.

The problem I'm having, though, is that it occurred to me that if this were a production system, an administrator would probably want to assign capabilities by some sort of aggregate unit, so that they don't have to do this for each individual user every time they create one. This way a user could just be added to a "filesystemAdmin" group and get things like CAP_DAC_OVERRIDE, or added to "ProcessManagement" and get things like CAP_SYS_NICE and CAP_SYS_KILL. Is this currently possible?
What you want to do is not possible. Not only does pam_cap only manipulate the inheritable capabilities (so it does not actually grant any permitted/effective capability at all), it also only deals with users and not groups (not even primary groups).
Is it possible to specify groups in /etc/security/capability.conf?
1,590,666,593,000
I've been working on writing my own Linux container from scratch in C. I've borrowed code from several places and put together a basic version with namespaces and cgroups. Basically, I clone a new process with all the CLONE_NEW* flags to create new namespaces for the cloned process. I also set up UID mapping by inserting 0 0 1000 into the uid_map and gid_map files; I want to ensure that root inside the container is mapped to root outside.

For the filesystem, I am using a base image of stretch created with debootstrap. Now, I am trying to set up network connectivity from inside the container. I used this script to set up the interface inside the container. This script creates a new network namespace of its own; I edited it slightly to bind-mount the net namespace of the cloned process onto the newly created one:

    mount --bind /proc/$PID/ns/net /var/run/netns/demo

I can get into the new network namespace as follows and successfully ping outside:

    ip netns exec ${NS} /bin/bash --rcfile <(echo "PS1=\"${NS}> \"")

But from the bash shell inside the cloned process, by default I am unable to ping. I get the error:

    ping: socket: Operation not permitted

I've tried setting the capabilities cap_net_raw and cap_net_admin. I would like some guidance.
I would prefer to work from a more complete specification. However, from careful reading of the script and your description, I conclude you are entering a network namespace (using the script) first, and entering a user namespace afterwards. The netns is owned by the initial userns, not your child userns. To do ping, you need cap_net_raw in the userns that owns the netns. I think.

There is a similar answer here, which provides links to reference documentation: Linux Capabilities with User Namespaces

(I think ping can also work without privilege if you have access to ICMP sockets. But at least on my Fedora 29, this does not seem to be used. An unprivileged cp "$(which ping)" . && ./ping localhost shows the same socket: Operation not permitted. Not sure why it has not been adopted.)
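To see the ordering issue in isolation, compare with util-linux unshare, which creates the user namespace first and then a network namespace owned by it (a quick illustration, no custom code involved):

    $ unshare -Ur -n bash   # user namespace with root mapping, then a netns owned by it
    # ip link set lo up
    # ping -c1 127.0.0.1    # works: we hold cap_net_raw in the userns that owns this netns

In your container, the fix is the same idea: create (or move into) the network namespace from a process that is already inside the new user namespace.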
Ping not working in a new C container
1,590,666,593,000
Consider the following transcript of a user-namespaced shell running with root privileges (UID 0 within the namespace, unprivileged outside):

    # cat /proc/$$/status | grep CapEff
    CapEff: 0000003cfdfeffff
    # ls -al
    total 8
    drwxrwxrwx  2 root   root   4096 Sep 16 22:09 .
    drwxr-xr-x 21 root   root   4096 Sep 16 22:08 ..
    -rwSr--r--  1 nobody nobody    0 Sep 16 22:09 file
    # ln file link
    ln: failed to create hard link 'link' => 'file': Operation not permitted
    # su nobody -s /bin/bash -c "ln file link"
    # ls -al
    total 8
    drwxrwxrwx  2 root   root   4096 Sep 16 22:11 .
    drwxr-xr-x 21 root   root   4096 Sep 16 22:08 ..
    -rwSr--r--  2 nobody nobody    0 Sep 16 22:09 file
    -rwSr--r--  2 nobody nobody    0 Sep 16 22:09 link

Apparently the process has the CAP_FOWNER permission (0x8) and thus should be able to hardlink to arbitrary files. However, it fails to link the SUID'd test file owned by nobody. There is nothing preventing the process from switching to nobody and then linking the file, so the parent namespace does not seem to be the issue. Why can't the namespaced UID 0 process hardlink link to file without switching its UID?
The behavior described in the question was a bug, which has been fixed in the upcoming Linux 4.4.
Why can't a UID 0 process hardlink to SUID files in a user namespace?
1,590,666,593,000
I am experimenting with capsh from libcap2-bin (1:2.32-1), but have found that I'm unable to use the == argument to re-exec capsh. In particular, when I use capsh's == argument, it complains that it couldn't execve(2) the /bin/bash shell. Does anyone experience a similar problem?

    ls -la /bin/bash
    -rwxr-xr-x 1 root root 1183448 Jun 18 2020 /bin/bash

    capsh --help
    ...
    == re-exec(capsh) with args as for --
    ...

    capsh == --print
    execve /bin/bash failed!

    capsh --print
    Current: =
    Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,cap_wake_alarm,cap_block_suspend,cap_audit_read
    Ambient set =
    Securebits: 00/0x0/1'b0
     secure-noroot: no (unlocked)
     secure-no-suid-fixup: no (unlocked)
     secure-keep-caps: no (unlocked)
     secure-no-ambient-raise: no (unlocked)
    uid=1000(parallels) euid=1000(parallels)
    gid=1000(parallels)
    groups=4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),120(lpadmin),131(lxd),132(sambashare),1000(parallels)
    Guessed mode: UNCERTAIN (0)
The issue is that it's trying to re-execute itself as capsh (or whatever command name and path you started it with). This is from strace capsh == --print:

    execve("capsh", ["capsh", "--print"], [/* 20 vars */]) = -1 ENOENT (No such file or directory)
    write(2, "execve /bin/bash failed!\n", 25execve /bin/bash failed!
    ) = 25

So it's not really "execve /bin/bash" that fails, but execve capsh. The execve() function does not do lookups in $PATH. Using capsh with its full path makes it work:

    $ command -v capsh
    /sbin/capsh
    $ /sbin/capsh == --print
    Current: =
    [... etc. ...]

See also the execve(2) manual on your system (man 2 execve).
Unable to use the `==` argument of `capsh` to re-exec it?
1,590,666,593,000
I have trouble understanding the root-equivalence of CAP_CHROOT given the template below. I understand that step 1 means creating a directory structure, containing all dependencies (e.g. shared objects), whose root will be the target for chroot(2). My question concerns the later steps in the template: Why is it necessary to backdoor ld.so or libc in step 2? Why is it necessary to create a hardlink to a setuid-root binary from the chroot environment in step 3? Why call chroot(2) to launch the setuid-root binary in step 4?
It's not necessary; it's one way to do it. By changing the root directory, you're invalidating the assumptions that components of the system make. /bin/su works on the assumption that the user database is in /etc/passwd and /etc/shadow, and that the libc (or any library it's linked to) is in some fixed location in /lib that no ordinary user can modify. If you're able to create a different filesystem layout where the same /bin/su command can be run but with a different /etc/passwd or a different libc which you can modify at will, then you can do anything (as su uses, or can use (possibly indirectly), /etc/passwd to authenticate users, and runs code in the libc).

Now, with that approach, having CAP_CHROOT is not the only thing that you need. You also need write access to a directory on a filesystem (as hardlinks can only be made within a given filesystem) that has at least one dynamically linked setuid-root executable. Systems where the system partitions have no user-writable area (or are even read-only) are not uncommon. It's also common to have filesystems with user-writable areas mounted with a nosuid flag. Many systems also forbid hardlinking files you don't own (see the fs.protected_hardlinks sysctl on Linux 3.6+ for instance).

But you don't need to hardlink the setuid executable inside your chroot jail. You can also do:

    chdir("/");
    chroot("/tmp/myjail");
    execl("bin/su", "su", (char *)NULL);

as even though the root of the process is changed by chroot, the current working directory will still be available afterward, even though that / directory and the bin/su resolved from there are not inside the jail. And /bin/su will still look for ld.so, /etc/passwd or the libc inside your jail, since they are accessed via absolute paths, so relative to the changed root directory. Leaving the current working directory or any file descriptor open on a file outside the jail gives you a door out of the jail.
Why is CAP_CHROOT equivalent to root?
1,590,666,593,000
I am teaching an Operating Systems course and trying to wrap my mind around the fork/execve technique for creating new processes. My current understanding is that a fork makes a complete copy of the old process and establishes a new PID and parent/child relationship, but otherwise does very little else. On the other hand, after the child process is created, it runs execve to replace most of its memory with the new program: the program code, stack, and heap are completely replaced and started from scratch. But not everything is replaced in the new process. The child process inherits file descriptors (which allows pipes to be set up before the execve), the process ID (PID), the user ID (UID), and some permissions (man page). I imagine the full list of properties that are NOT replaced by an execve call is quite long, but are there any other key properties like the ones I mentioned above that I'm missing?
Since we’re discussing Linux specifically (at least, I take it that’s what you want since you used the linux tag), the fork and execve manpages are the appropriate references; they list all the attributes which aren’t preserved. Most of this behaviour is specified by POSIX, but there are some Linux specificities. The man pages don’t list attributes which are preserved, focusing instead on those which aren’t: All process attributes are preserved during an execve(), except the following: etc. I won’t try to answer your question by listing all the attributes which are preserved. However I will point out one key property which is preserved, and which you haven’t listed: ignored and default signals are preserved across execve. This means that a parent can ignore a signal (at least, signals that can be ignored) and that behaviour will be propagated to any children. This is what allows nohup to work. You can find a complete list of process attributes, with an explanation of what happens to them on exec() or fork(), in section 28.4 of The Linux Programming Interface.
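For instance, this preserved-disposition rule is how nohup works under the hood (a minimal sketch in C; sleep is just a stand-in for the real command):

    #include <signal.h>
    #include <unistd.h>

    int main(void)
    {
        signal(SIGHUP, SIG_IGN);  /* set SIGHUP to be ignored */
        /* the "ignored" disposition survives the execve() below */
        execlp("sleep", "sleep", "1000", (char *)NULL);
        return 1;  /* only reached if execlp() fails */
    }

Sending SIGHUP to the resulting sleep process has no effect, even though sleep itself never asked to ignore the signal.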
What properties of an unprivileged process are preserved during an `execve` call?
1,590,666,593,000
On my openSUSE 13.2 machine I can apply commands like setcap and getcap to an application. I moved that application to an openSUSE 12 machine that does not have the capability tools installed. On the 13.2 machine I have the packages libcap-ng0, libcap1, libcap1-32bit, libcap2 and libcap2-32bit installed. I installed the same packages on the openSUSE 12 machine and I still get this message:

    If 'setcap' is not a typo you can use command-not-found to lookup the package that contains it

Even as root. I just don't remember how I got capabilities to work on my machine. How do I install these commands so I can set file capabilities?
You need libcap-progs sudo zypper install libcap-progs
How to install Linux capabilities like setcap and getcap?
1,590,666,593,000
Documentation says that capabilities are per-thread attributes. Indeed, in any /proc/[PID]/task/[LWP]/status we can find the capabilities related to that thread:

    CapInh: 0000000000000000
    CapPrm: 0000000000000000
    CapEff: 0000000000000000
    CapBnd: 0000003fffffffff
    CapAmb: 0000000000000000

But at the same time, similar information about capabilities is located in /proc/[PID]/status, so a process obviously has its own capabilities. That confuses me: are capabilities attributes of a process or of a thread? And which set is checked by the kernel when some capability-requiring operation is performed?
Capabilities are indeed per-thread, and a thread can change its own capabilities (as allowed by the current capabilities) using capset without affecting other existing threads’ capabilities. /proc/[PID]/status shows the capabilities for the thread matching the pid, or more accurately, the thread group id (which is the process id in Linux). The kernel always checks the capabilities of the relevant thread.
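One way to see this for yourself (a quick check from a shell, whose single thread is its own thread group leader; the PID shown is a placeholder):

    $ grep CapEff /proc/$$/status /proc/$$/task/$$/status
    /proc/1234/status:CapEff:           0000000000000000
    /proc/1234/task/1234/status:CapEff: 0000000000000000

Both files report the same values here because the shell is single-threaded; in a multi-threaded process, /proc/[PID]/status simply shows the sets of the thread whose ID equals the PID.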
Are capabilities per-process or per-thread attributes?
1,590,666,593,000
Is there any Linux capability that enables normal users to write to root-owned files like /etc/resolv.conf and /etc/fstab?
No. There's a capability that allows accessing arbitrary files regardless of permissions (CAP_DAC_OVERRIDE), but it's almost equivalent to granting root access (if you can overwrite /etc/passwd, with most configurations, you're in): it's only useful for processes that perform a specific task (for example a backup program), not to grant to a user. And there's no capability that allows bypassing permissions for specific files: capabilities are boolean, they aren't parametrized by a list of files.

It would be pretty much pointless anyway, because there's already a mechanism to allow users to write to specific files: permissions. Create a group, add the users to the group, and grant the group write access to the file.

    addgroup fstab-writers
    adduser alice fstab-writers
    # Note that this only takes effect when alice logs in,
    # not in her already-running session(s).
    chgrp fstab-writers /etc/fstab
    chmod g+w /etc/fstab

If more than one group needs particular permissions on the file, use an access control list instead of chgrp and chmod.

    setfacl -m g:fstab-writers:rw /etc/fstab

Note that if a system program overwrites the file in question, there's no guarantee that it'll reproduce the group ownership or the access control list. But if that's the case, you probably shouldn't be modifying this file manually anyway.

Also note that for both /etc/fstab and /etc/resolv.conf, there are well-established mechanisms that don't require giving users write permissions. Giving a user write permission on /etc/fstab is equivalent to allowing them to run arbitrary commands as root; the easiest way is to mount a filesystem image containing a setuid root executable, and there are others. If you want to allow users to mount filesystems, they can use udisks (which is what desktop environments use under the hood) or pmount. /etc/resolv.conf is typically managed automatically by NetworkManager, which can be controlled by non-root users; this is what desktop environments use under the hood, and there is also a command line interface (nmcli). Even in the absence of NetworkManager, many distributions ship resolvconf to manage it automatically when the network connection changes.
Linux capabilities for modifying files owned by root?
1,590,666,593,000
I've read in another answer that on Android the su binaries avoid needing to be setuid by using filesystem capabilities like cap_setuid. But then I tried to check this, and to my surprise, I found no capabilities set on my Magisk-enabled Android 8.0 system. Here's how I checked: I logged in via SimpleSSHD, then scp'ed the following binaries taken from the Debian arm64 packages libcap2, libcap2-bin and libc6: getcap, libc.so.6, libcap.so.2.25, libcap.so.2, ld-2.27.so. Then I had the following terminal session on the phone:

    $ su
    # whoami
    root
    # exit
    $ type su
    su is /sbin/su
    $ ls -lh /sbin/su
    lrwxrwxrwx 1 root root 12 2018-08-12 22:40 /sbin/su -> /sbin/magisk
    $ ls -lh /sbin/magisk
    -rwxr-xr-x 1 root root 94 2018-08-12 22:40 /sbin/magisk
    $ sed 's@^@> @' /sbin/magisk
    > #!/system/bin/sh
    > unset LD_LIBRARY_PATH
    > unset LD_PRELOAD
    > exec /sbin/magisk.bin "${0##*/}" "$@"
    $ ls -lh /sbin/magisk.bin
    -rwxr-xr-x 1 root root 71K 2018-08-12 22:40 /sbin/magisk.bin
    $ file /sbin/magisk.bin
    /sbin/magisk.bin: ELF shared object, 32-bit LSB arm, dynamic (/system/bin/linker), stripped
    $ LD_LIBRARY_PATH=. ./ld-2.27.so ./getcap -v /sbin/magisk.bin
    /sbin/magisk.bin

As you can see, neither a setuid bit nor any capabilities are present on the /sbin/magisk.bin binary. So what's going on? How does it work?
It appears that the /sbin/magisk.bin the non-root user launches doesn't spawn the root shell by itself. Instead, it communicates its request to magiskd, which is executed as root, and magiskd, after checking for permissions, executes the command requested. (Interestingly, magiskd is the same binary, /sbin/magisk.bin, but run by init as root.) You can check this as follows:

    $ echo $$
    27699
    $ su
    # echo $PPID
    2606
    # exit
    $ su
    # echo $PPID
    2606
    # ps -A|egrep '^[^ ]+ +2606'
    root 2606 1 16044 2068 __skb_recv_datagram eb4d2fe0 S magiskd

Note that in the output above, after we exit the superuser shell and re-enter it, the parent PID still remains the same (2606 in this session), and is not equal to the PID of the original non-root shell (27699 in this session). Moreover, the parent PID of magiskd is 1, i.e. init, which is one more confirmation that it's not what we started from our non-root shell.
How does Magisk on Android work as su without setuid and capabilities?
1,590,666,593,000
I'm trying to understand POSIX-capabilities principles, their transformation during execve() to be more specific. I'll quote some documentation during my question:

    P'(ambient)     = (file is privileged) ? 0 : P(ambient)
    P'(permitted)   = (P(inheritable) & F(inheritable)) | (F(permitted) & P(bounding)) | P'(ambient)
    P'(effective)   = F(effective) ? P'(permitted) : P'(ambient)
    P'(inheritable) = P(inheritable)  [i.e., unchanged]
    P'(bounding)    = P(bounding)     [i.e., unchanged]

where:

    P()  denotes the value of a thread capability set before the execve(2)
    P'() denotes the value of a thread capability set after the execve(2)
    F()  denotes a file capability set

According to this, first of all we check whether the executable file is privileged or not. A privileged file is defined there as one that has capabilities or has the set-user-ID or set-group-ID bit enabled:

    When determining the transformation of the ambient set during execve(2), a privileged file is one that has capabilities or has the set-user-ID or set-group-ID bit set.

Then we check whether the file's effective bit is enabled or not. So now we have 4 situations based on the two checks:

1. A privileged file has the effective bit enabled -> the file has capabilities that have to be considered -> calculate the new capabilities.
2. An unprivileged file has the effective bit disabled -> the file has no capabilities -> use the thread's ambient set.
3. A privileged file has the effective bit disabled -> the file probably has the setuid/setgid bit enabled. I assume that means capabilities shouldn't be used at all, so as not to mix two different permission tools -> the thread's effective set becomes 0.
4. An unprivileged file has the effective bit enabled. It has no capabilities (since it's unprivileged), so how can an unprivileged file possibly have the effective bit enabled?

I can't understand the 4th case. Even if the effective bit doesn't affect the file's privilege status, why would we set it without permitted or inheritable capabilities? So, my question is: what specific situation can possibly arise to lead to this 4th case?
I thought that this situation couldn’t arise, i.e. that the effective bit couldn’t be set if there is no permitted or inherited capability. The behaviour seen with setcap appears to confirm this:

    $ sudo setcap cap_chown=ep mybinary
    $ getcap mybinary
    mybinary = cap_chown+ep
    $ sudo setcap cap_chown=e mybinary
    $ getcap mybinary
    mybinary =

However, as you discovered, it is possible to set the effective bit even though no capabilities are stored:

    $ xattr -l mybinary
    0000   01 00 00 02 00 00 00 00 00 00 00 00 00 00 00 00   ................
    0010   00 00 00 00                                       ....

These values represent a vfs_cap_data structure, version 2 (0x02000001). The last bit set in the first 32-bit value indicates that these are effective capabilities; but the capabilities (inherited and permitted) are all set to 0.
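If you want to reproduce such a file directly, you can write the raw security.capability xattr yourself (a hedged sketch: this assumes setfattr from the attr package, which accepts 0x-prefixed hex values, and it needs root or CAP_SETFCAP; the 20 bytes are the VFS_CAP_REVISION_2 header 0x02000001 followed by four zeroed 32-bit capability words):

    sudo setfattr -n security.capability \
        -v 0x0100000200000000000000000000000000000000 mybinary

getcap then shows an empty capability set for the file, yet the effective flag is set in the stored structure, which is exactly the 4th case from the question.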
How can an unprivileged file have an enabled effective bit? (POSIX capabilities)
1,590,666,593,000
C code here:

    #include <stdio.h>
    #include <stdlib.h>

    int main()
    {
        printf("PATH   : %s\n", getenv("PATH"));
        printf("HOME   : %s\n", getenv("HOME"));
        printf("ROOT   : %s\n", getenv("ROOT"));
        printf("TMPDIR : %s\n", getenv("TMPDIR"));
        return(0);
    }

After doing:

    gcc env.c -o printenv
    setcap 'cap_dac_override+eip' printenv
    sudo -S su -s $(which bash) steve
    export TMPDIR=hello
    ./printenv

I got this output:

    PATH   : /sbin:/bin:/usr/sbin:/usr/bin
    HOME   : /home/steve
    ROOT   : (null)
    TMPDIR : (null)

If I remove the capability set on printenv, the output is:

    PATH   : /sbin:/bin:/usr/sbin:/usr/bin
    HOME   : /home/steve
    ROOT   : (null)
    TMPDIR : hello

How could this be? After some searching, I found this: http://polarhome.com/service/man/?qf=secure_getenv&tf=2&of=RedHat&sf= It mentions that when capabilities are set, getenv becomes secure_getenv, and therefore all getenv() library calls return NULL. However, in that case how come the PATH and HOME environment variables are printed?
OK, after some digging I found out the reason behind this. TMPDIR is among those special variables that ld.so ignores, for security reasons, when a file capability is set and the running user is non-root. For more details please see the man page of ld.so: https://man7.org/linux/man-pages/man8/ld.so.8.html

    ENVIRONMENT
        Various environment variables influence the operation of the dynamic linker.

    Secure-execution mode
        For security reasons, if the dynamic linker determines that a binary should be run in secure-execution mode, the effects of some environment variables are voided or modified, and furthermore those environment variables are stripped from the environment, so that the program does not even see the definitions. Some of these environment variables affect the operation of the dynamic linker itself, and are described below. Other environment variables treated in this way include: GCONV_PATH, GETCONF_DIR, HOSTALIASES, LOCALDOMAIN, LOCPATH, MALLOC_TRACE, NIS_PATH, NLSPATH, RESOLV_HOST_CONF, RES_OPTIONS, TMPDIR, and TZDIR.

        A binary is executed in secure-execution mode if the AT_SECURE entry in the auxiliary vector (see getauxval(3)) has a nonzero value. This entry may have a nonzero value for various reasons, including:

        * The process's real and effective user IDs differ, or the real and effective group IDs differ. This typically occurs as a result of executing a set-user-ID or set-group-ID program.
        * A process with a non-root user ID executed a binary that conferred capabilities to the process.
        * A nonzero value may have been set by a Linux Security Module.
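A quick way to confirm which mode a given binary runs in (no code changes needed; LD_SHOW_AUXV is a glibc dynamic-linker feature):

    $ LD_SHOW_AUXV=1 ./printenv | grep AT_SECURE

For the copy without file capabilities this prints AT_SECURE: 0; for the setcap'ed copy, ld.so itself strips LD_SHOW_AUXV in secure-execution mode, so the line doesn't appear at all.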
Why does a non-root user's getenv call on an exported variable return NULL?
1,590,666,593,000
Obviously, it cannot read, write, or execute files that it doesn't have permission to. But I am talking about other "actions"; one I know of is binding to ports numbered lower than 1024. What else?
Non-privileged processes can’t do a lot of things; on Linux, man 7 capabilities contains a comprehensive list. Examples beyond your two include: controlling auditing; setting up BPF; changing file ownership to arbitrary values; opening raw sockets; changing to arbitrary users and groups; setting up arbitrary namespaces; loading or unloading kernel modules; rebooting. Note that on Linux, all this isn’t controlled only by root, but also by capabilities, so subsets of these privileges can be granted to non-root processes. There are also other mechanisms to request a privileged process to perform a privileged operation on behalf of a non-privileged user (e.g. rebooting).
Which actions can a non-root process not perform?
1,590,666,593,000
I'd like to set different capabilities in the permitted and inheritable sets of my file. Something like this:

    sudo setcap cap_fsetid=ei mybinary
    sudo setcap cap_kill=ep mybinary

However, the latter command overrides the former one. Is it even possible to manage capabilities this way?
You can use getcap to get the list of currently set capabilities. Since these are whitespace separated, you can add more like this:

    sudo setcap first_capability=itsvalue executable_fname
    sudo setcap "$(getcap executable_fname) newcap=value" executable_fname

(the capability list being whitespace separated: as cited in man setcap, that's described in man cap_from_text)

Caveat: querying before setting is not safe from race conditions; make sure no other process is concurrently setting capabilities. If you know all capabilities at any given point in time, it's easier:

    sudo setcap "first_capability=itsvalue newcap=value" executable_fname

In this context, + and = can be used to raise the capabilities. Using = here is easier to read, but there are some corner cases where + builds up state more predictably, specifically where the listed capabilities for two different assignments have a common subset. For example, these two are not equivalent:

    $ sudo setcap "first=i second=p first,second=e" executable
    $ sudo setcap "first=i second=p first,second+e" executable

The second of them is equivalent to:

    $ sudo setcap "first=ie second=pe" executable
Can two different sets of capabilities be set to one file?
1,590,666,593,000
I'd like to run a service as a non-privileged user, but it needs to bind to a system port number (i.e. less than 1024), so I give it setcap 'cap_net_bind_service=+ep' <path for service>, all good. Problem is, on startup, the service reads environment vars and for some reason it can't do that when it has cap_net_bind_service. So, with two copies of the executable, one with cap_net_bind_service, one without, only the one without can read environment vars. It's as though there's a default set of capabilities that allows reading env vars, but the exe loses that capability when I give it cap_net_bind_service. Is that right, or is something else going on? What additional capability might I need to give to the service so that it can read env vars? There's nothing in capability.h that jumps out as being "allow env var reading"?
I got to the bottom of it. In brief, the binary is using secure_getenv to access environment variables. This returns NULL, instead of accessing the variables, when the binary is run in "secure execution" mode (AT_SECURE=1). Having any capability set causes it to run in this mode.

Confirm that the binary is using secure_getenv with readelf:

    readelf -a <path for service> | grep getenv
    00000040ac28  004b00000007 R_X86_64_JUMP_SLO 0000000000000000 secure_getenv@GLIBC_2.17 + 0
    00000040ad90  007a00000007 R_X86_64_JUMP_SLO 0000000000000000 getenv@GLIBC_2.2.5 + 0
        75: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND secure_getenv@GLIBC_2.17 (6)
       122: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND getenv@GLIBC_2.2.5 (2)

Confirm that it's running in secure-execution mode with the environment variables (how ironic!) LD_DEBUG=all and/or LD_SHOW_AUXV (see man ld.so). If it isn't, then LD_SHOW_AUXV produces output with AT_SECURE set to 0. There is no output from LD_SHOW_AUXV when it is running in secure-execution mode. Normally, there is/isn't output from LD_DEBUG too when it is/isn't running in secure-execution mode. However, if /etc/suid-debug is present (an empty file, created with touch), then LD_DEBUG will produce output even in secure-execution mode. See man getauxval for more information about AT_SECURE and secure-execution mode.
linux capabilities to read environment variables?
1,590,666,593,000
I have an application that I want to be able to change the hostname in Linux. Currently doing so by running the hostname command. I don't want to set CAP_SYS_ADMIN either. I also don't want to edit /etc/hostname and reboot. Is there a capability that only just allows changing the hostname? If not what are my options?
Setting the hostname on Linux is done via the sethostname(2) syscall, and /bin/hostname is a bare wrapper around this syscall (and a few related syscalls). /etc/hostname is supposed to be read during the boot process by some script, which subsequently runs /bin/hostname to accomplish its job.

CAP_SYS_ADMIN is one of the Linux capabilities(7); it allows a thread to perform various system administration operations, which include sethostname. I'm not aware of a smaller granularity within the capabilities framework.

However, there are other options. We can grant some user the ability to run some command as another user, via sudo(8), in a customisable manner. This example sudoers(5) configuration will allow user alice to run /bin/hostname as root:

    alice ALL=(root:ALL) /bin/hostname

As described in this superuser question, the first "ALL" can be replaced by the hosts where the command may be run, which is not of use unless in a cooperative environment. "root" can be replaced by "ALL" to allow alice to run the command as any user, and the second "ALL" can be replaced by groups. The last field is the list of commands alice is allowed to run. As /bin/hostname has limited usage, I guess it is fine. Otherwise we may have it followed by an argument, so that alice cannot run the command without this argument, to restrain the power.
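For reference, the underlying call is tiny (a minimal sketch; "newhost" is a placeholder, and without CAP_SYS_ADMIN the call fails with EPERM):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *name = "newhost";
        if (sethostname(name, strlen(name)) != 0)
            perror("sethostname");   /* EPERM without CAP_SYS_ADMIN */
        return 0;
    }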
Set hostname without root, and without CAP_SYS_ADMIN
1,590,666,593,000
I am developing a daemon started by upstart (Ubuntu 14.04) which needs to run as a non-privileged user (for security), but bind privileged port 443. I am using setcap to set the CAP_NET_BIND_SERVICE capability for the executable (it's not a script). I am setting it in Permitted, Effective, and Inherited sets (setcap 'cap_net_bind_service+eip' EXEC). I can su to the non-privileged user, and run it directly, and it works perfectly. It correctly binds the port, and /proc/PID/status shows the proper capabilities masks with 0x400 bit set. But when I start the service via upstart it does not run with the capabilities specified for the binary, and the bind() fails (EPERM). /proc/PID/status shows capabilities masks are all 0. Any ideas?
I'm now thinking this is a bug, and related to the way upstart starts services with "expect daemon" (i.e. services that fork twice upon startup). I notice that if I use strace on a process that is using capabilities(7), the capabilities are also ignored. I suspect that upstart, in order to determine the PID to wait on, traces a service specified with "expect daemon" long enough to obtain the PID, and that's causing the kernel capabilities mechanism to fail. So the bug is in the way that capabilities interact with process tracing, and the fact that upstart uses process tracing when starting a service with "expect daemon" (this is supposition).

As a simple test:

1. Write a small C PROGRAM to bind to port 443 (you cannot use an interpreted language such as python with capabilities(7)).
2. Run it as non-root, and see that it fails to bind due to lack of privilege.
3. Set the CAP_NET_BIND_SERVICE capability for your PROGRAM (as root run setcap 'cap_net_bind_service+epi' PROGRAM).
4. Run it as non-root, and see that it now succeeds.
5. Now run it with strace, and see that it now fails.

(Note that in step 3, strictly speaking, the Inheritable capability set (the i flag) does not need to be modified for this test, but it does for a process that fork()s, such as my daemon.)

I'll file a bug against the kernel about this, since nothing on the capabilities(7) man page says it should not work with process tracing.
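A minimal version of the test program from step 1 might look like this (a sketch, assuming IPv4; compile with gcc -o PROGRAM bind443.c):

    #include <stdio.h>
    #include <string.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(443);           /* privileged port */

        if (bind(fd, (struct sockaddr *)&addr, sizeof addr) != 0) {
            perror("bind");                   /* fails without CAP_NET_BIND_SERVICE */
            return 1;
        }
        puts("bound to port 443");
        return 0;
    }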
Can't get upstart service to honor capabilities(7)
1,590,666,593,000
There is a non-capability-aware program that requires at least 1) cap_sys_admin and 2) either cap_dac_override or cap_dac_read_search. This can be proven as follows:

    sudo setcap 'all=ep cap_sys_admin-ep' ./binary                         # ./binary doesn't work
    sudo setcap 'all=ep cap_dac_override-ep' ./binary                      # ./binary works
    sudo setcap 'all=ep cap_dac_read_search-ep' ./binary                   # ./binary works
    sudo setcap 'all=ep cap_dac_override,cap_dac_read_search-ep' ./binary  # ./binary doesn't work

I want to do the same checks using capsh instead of setcap. Before these checks, all file capabilities are removed using sudo setcap -r ./binary. The first three succeed; the results match setcap:

    sudo capsh --user=jdoe --keep=1 --caps="all=eip" --addamb="all" --delamb="cap_sys_admin" -- -c ./binary
    sudo capsh --user=jdoe --keep=1 --caps="all=eip" --addamb="all" --delamb="cap_dac_override" -- -c ./binary
    sudo capsh --user=jdoe --keep=1 --caps="all=eip" --addamb="all" --delamb="cap_dac_read_search" -- -c ./binary

The last one fails: the program still works while it shouldn't:

    sudo capsh --user=jdoe --keep=1 --caps="all=eip" --addamb="all" --delamb="cap_dac_override,cap_dac_read_search" -- -c ./binary

Is there some difference between filesystem and process capabilities that I fail to notice? How do I write the third test properly?
So I think the answer to your question lies in what your program is doing. (In general, it is always good to provide some simplified source code with your question to reproduce what you are seeing.) I've quickly coded something up (in Go, because it's slightly less code to generate debug output than C and libcap, and it provides a working example of the cap Go package). This is binary.go:

    package main

    import (
        "log"
        "os"

        "kernel.org/pub/linux/libs/security/libcap/cap"
    )

    func confirm(c *cap.Set, val cap.Value) int {
        on, err := c.GetFlag(cap.Effective, val)
        if err != nil {
            log.Fatalf("unable to confirm %q in effective set: %v", val, err)
        }
        log.Printf("%q in effective set of %q is: %v", val, c, on)
        if on {
            return 0
        }
        return 1
    }

    func fail() {
        log.Print("FAILURE")
        os.Exit(1)
    }

    func main() {
        c := cap.GetProc()
        if confirm(c, cap.SYS_ADMIN) != 0 {
            fail()
        }
        if confirm(c, cap.DAC_OVERRIDE)+confirm(c, cap.DAC_READ_SEARCH) > 1 {
            fail()
        }
        log.Print("SUCCESS")
    }

Compile it as follows:

    $ go mod init binary
    $ go mod tidy
    $ go build binary.go
    $ ./binary
    2022/09/10 16:45:56 "cap_sys_admin" in effective set of "=" is: false
    2022/09/10 16:45:56 FAILURE
    $ echo $?
    1

This program, binary, has all of the properties that you describe and works the way you expect it to. Where things differ between the file capability version and the ambient inheritance version (the one that uses capsh) is that there are Inheritable process capabilities present:

    $ sudo setcap 'all=ep cap_dac_override,cap_dac_read_search-ep' ./binary
    $ ./binary
    2022/09/10 16:50:37 "cap_sys_admin" in effective set of "=ep cap_dac_override,cap_dac_read_search-ep" is: true
    2022/09/10 16:50:37 "cap_dac_override" in effective set of "=ep cap_dac_override,cap_dac_read_search-ep" is: false
    2022/09/10 16:50:37 "cap_dac_read_search" in effective set of "=ep cap_dac_override,cap_dac_read_search-ep" is: false
    2022/09/10 16:50:37 FAILURE
    $ sudo setcap -r binary
    $ sudo capsh --user=$(whoami) --keep=1 --caps="all=eip" --addamb="all" --delamb="cap_dac_override,cap_dac_read_search" -- -c ./binary
    2022/09/10 16:52:21 "cap_sys_admin" in effective set of "=eip cap_dac_override,cap_dac_read_search-ep" is: true
    2022/09/10 16:52:21 "cap_dac_override" in effective set of "=eip cap_dac_override,cap_dac_read_search-ep" is: false
    2022/09/10 16:52:21 "cap_dac_read_search" in effective set of "=eip cap_dac_override,cap_dac_read_search-ep" is: false
    2022/09/10 16:52:21 FAILURE

That is, you see "=ep" in the file capability version and "=eip" in the ambient one. The ".i." part is not a capability that is useful to the program directly; it only comes into play when a program is executed. I think your code might be checking for Inheritable process capabilities. Again, these are not privilege on their own. They only represent privilege when they are combined with file Inheritable capabilities, or Ambient capabilities. I've done a full write-up of how capability inheritance works on the libcap distribution website. If all this stuff is still confusing, the examples there might be helpful.
Reproduce setcap behavior with capsh
1,590,666,593,000
I'm trying to run an openvpn server within an unprivileged podman container. Openvpn needs to be able to manage network interfaces (i.e. create a tun interface, assign an IP address to it, bring it up). On my system (Arch Linux), within openvpn-server.service I noticed CapabilityBoundingSet, and this made me experiment and create my own service which, instead of running openvpn directly, runs podman run.

First I created my openvpn container; below is the Dockerfile (I used archlinux as base for convenience):

    FROM archlinux
    RUN pacman -Sy --noconfirm openvpn

I then built this container (being logged in as my_unprivileged_user):

    podman build \
        --force-rm \
        --no-cache \
        --rm \
        --device=/dev/net/tun \
        -t openvpn .

Then I created my_custom_openvpn.service:

    [Unit]
    Description=OpenVPN in Podman container
    After=syslog.target network-online.target
    Wants=network-online.target

    [Service]
    User=my_unprivileged_user
    Group=my_unprivileged_group
    WorkingDirectory=/etc/openvpn
    ExecStart=/usr/bin/podman run --rm --name openvpn -v ./server:/server --device /dev/net/tun --network "host" --cap-add CAP_IPC_LOCK,CAP_NET_ADMIN,CAP_NET_BIND_SERVICE,CAP_NET_RAW,CAP_SETGID,CAP_SETUID,CAP_SYS_CHROOT,CAP_DAC_OVERRIDE,CAP_AUDIT_WRITE localhost/openvpn:latest /usr/bin/openvpn --config /server/my_config.conf
    ExecStop=/usr/bin/podman stop -t 0 openvpn
    Capabilities=CAP_IPC_LOCK CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETGID CAP_SETUID CAP_SYS_CHROOT CAP_DAC_OVERRIDE CAP_AUDIT_WRITE
    DeviceAllow=/dev/null rw
    DeviceAllow=/dev/net/tun rw
    #ProtectSystem=true
    #ProtectHome=true
    RestartSec=5s
    Restart=on-failure
    TimeoutSec=5s

    [Install]
    WantedBy=multi-user.target

So I thought systemd would pass the capabilities to podman, which in turn would pass them further down to openvpn. But openvpn fails to start, complaining it cannot create the tun0 interface. Even if I create tun0 myself, like this:

    openvpn --mktun --dev tun0

I get another error: openvpn cannot set this tun0 interface up. I thought maybe I needed to run setcap within the container, so I podman exec'ed into it and executed:

    setcap CAP_IPC_LOCK,CAP_NET_ADMIN,CAP_NET_BIND_SERVICE,CAP_NET_RAW,CAP_SETGID,CAP_SETUID,CAP_SYS_CHROOT,CAP_DAC_OVERRIDE,CAP_AUDIT_WRITE=+ep /usr/bin/openvpn

But this did not help. I keep getting this error:

    Tue Jan 28 13:34:31 2020 /usr/bin/ip link set dev tun0 up mtu 1500
    RTNETLINK answers: Operation not permitted

Maybe trying to use capabilities like this does not make sense?
I managed to get openvpn working by replacing ip within the container with a bash script that always returns 0. I figured the only things openvpn tries to do with it are to set up tun0, assign it the IP address, and bring it up. I decided to do this manually from outside of the container (as root), so openvpn does not have to. I described the procedure on the openvpn wiki here.
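The moving parts look roughly like this (a hedged sketch of the approach, not the exact wiki procedure; the interface name, address, and paths are placeholders). Inside the container, ip becomes a no-op:

    #!/bin/bash
    # stub replacing /usr/bin/ip in the container: pretend success,
    # the host has already configured tun0 for us
    exit 0

And on the host, as root, the interface is prepared before the container starts:

    ip tuntap add dev tun0 mode tun
    ip addr add 10.8.0.1/24 dev tun0
    ip link set tun0 up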
Run openvpn as non-root user
1,590,666,593,000
I'm exploring the namespace feature of the Linux kernel, using Arch Linux. But I got some messages that I can't explain; could anyone explain them to me?

    xtricman⚓ArchVirtual⏺️~🤐export LANG=en_US.UTF-8
    xtricman⚓ArchVirtual⏺️~🤐unshare --propagation private -r bash
    Could not get property: Access denied
    root⚓⏺️~🤐mount -o remount,ro /
    mount: /: permission denied.

Based on the ArchWiki, I CAN create a user namespace using my normal account, and I do, but why do I get the Could not get property: Access denied message? Based on the manpage, the newly created bash process has full capabilities in the new namespace, so why do I get the "permission denied" message when I try to mount? Is there anything related to file capabilities? How can I check the capabilities the current bash process has?
The command you are trying to run would change the root filesystem to read-only. It would affect things outside the namespace as well, so you do not have permission. You only want to change one specific mount, the mount inside the namespace. Use this command:

    mount -o remount,bind,ro /
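Continuing the transcript from the question, the bind-remount variant succeeds and only affects this mount namespace (an illustration; the touch target is arbitrary):

    # mount -o remount,bind,ro /
    # touch /x
    touch: cannot touch '/x': Read-only file system

Outside the namespace, / remains writable, because the bind-remount changed the per-mount flags rather than the shared superblock.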
Why do I get permission denied when using unshare?
1,590,666,593,000
I have this C++ code that runs the smartctl command and captures its output:

    #include <iostream>
    #include <cstdio>
    #include <cstdlib>

    using namespace std;

    int main()
    {
        cout << "Hello ! " << endl;

        FILE *fp;
        char path[1035];

        /* Open the command for reading. */
        fp = popen("smartctl -A /dev/sda", "r");
        if (fp == NULL) {
            printf("Failed to run command\n");
            exit(1);
        }

        /* Read the output a line at a time - output it. */
        while (fgets(path, sizeof(path), fp) != NULL) {
            printf("%s", path);
        }

        /* close */
        pclose(fp);
        return 0;
    }

The machine is openSUSE 13.2. My problem is that it requires root privilege, but I don't want to switch to root. What I tried that did not work:

1. Added the CAP_SYS_ADMIN capability to the program executable.
2. Added CAP_DAC_OVERRIDE to the smartctl executable (as suggested here) and also to the program executable. I used setcap 'cap_dac_override=+ep' cpptest to add the capability (likewise for CAP_SYS_ADMIN), but when I use getcap I only get one of them, not both. (Can you help me with this?)
3. Changed the permissions of /dev/sda to 755, 777, 766 and changed the ownership of the device to my group using chown root:users /dev/sda.

I wrote a program that performs some network operations using sockets; it required root privilege, but I added CAP_NET_ADMIN and it worked. I just don't know why capabilities do not work here. This is a really important issue; any help is really appreciated. Please note: I provided C++ code, but this is an issue with Linux, not a programming problem.
According to this discussion, the CAP_SYS_RAWIO capability needs to be applied to the smartctl executable.
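In concrete terms (a hedged sketch; the exact capability needs can vary with smartctl versions and device types, and your user still needs permission to open the device node itself, e.g. via the disk group):

    sudo setcap cap_sys_rawio+ep "$(command -v smartctl)"
    # ensure the invoking user can open /dev/sda at all:
    sudo usermod -aG disk youruser   # then log in again

    smartctl -A /dev/sda             # should now work without root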
How to run smartctl as root without switching to root?
1,590,666,593,000
I am working on an embedded system device which basically has a root user. I have a systemd service, call.service, which works fine with root access. The service basically creates a few sockets and then interacts with the network device. I want to launch this service as user userA, with capabilities like net_raw and net_admin. I have written the following unit file, /etc/systemd/system/multi-user.target.wants/call.service:

    [Unit]
    Description=XXX call service
    After=network-online.target
    Wants=network-online.target

    [Service]
    Type=simple
    User=userA
    Group=userA
    ExecStart=/opt/call/bin/call eth0 -P -1
    CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_RAW
    ExecStartPre=/bin/mkdir -p /tmp/call
    ExecStartPre=/bin/chmod -R 755 /tmp/call
    ExecStopPost=/bin/rm -rf /tmp/call

    [Install]
    WantedBy=multi-user.target

However, when I launch this service, it fails with an error stating that socket creation gave "Operation Not Permitted":

    $ systemctl restart call
    Dec 01 17:56:10 xxxx call[26955]: ERROR : CALL [17:56:10:682] socket creation failed: Operation not permitted

The corresponding source for the error:

    // file call.cpp
    net_iface_l->sd_general = socket( PF_PACKET, SOCK_DGRAM, 0 );
    if( net_iface_l->sd_general == -1 ) {
        LOG_ERROR( "socket creation failed: %s", strerror(errno));
        return false;
    }

Can someone point out if there is a mistake in the user creation or the capabilities defined? Maybe something is missing in terms of the user permissions here, which I am unable to understand.
Adding the following line to the [Service] section does the trick:

    AmbientCapabilities=CAP_NET_ADMIN CAP_NET_RAW
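Putting it together (a sketch of the relevant lines only; the rest of the unit stays as in the question). CapabilityBoundingSet alone only limits what the service may have, while AmbientCapabilities is what actually grants capabilities to a non-root User=:

    [Service]
    User=userA
    CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_RAW
    AmbientCapabilities=CAP_NET_ADMIN CAP_NET_RAW

Then reload and restart:

    systemctl daemon-reload
    systemctl restart call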
Assign capability to systemd service and specific user
1,395,769,777,000
I'm struggling with cpupower on Arch Linux. I want to set the governor to ondemand or even to conservative. First, if I run sudo cpupower frequency-info --governors, I only get performance powersave. So I look for available modules like this:

    ls -1 /lib/modules/`uname -r`/kernel/drivers/cpufreq/

...and I get:

    acpi-cpufreq.ko.gz
    amd_freq_sensitivity.ko.gz
    cpufreq_conservative.ko.gz
    cpufreq_powersave.ko.gz
    cpufreq_stats.ko.gz
    cpufreq_userspace.ko.gz
    p4-clockmod.ko.gz
    pcc-cpufreq.ko.gz
    powernow-k8.ko.gz
    speedstep-lib.ko.gz

So, first of all, no module for "ondemand" seems to be available. What am I missing? Then I try to enable at least conservative:

    $ sudo modprobe cpufreq_conservative

then I check that the module is actually loaded:

    $ lsmod | grep cpufreq

and check whether it is now available:

    $ sudo cpupower frequency-info --governors

but unfortunately I still get the same: performance powersave only. And if I try to enable conservative:

    $ sudo cpupower frequency-set -g conservative

it says that the module is not available. So basically I have two questions:

1. What do I need to install in order to have the ondemand module?
2. How can I enable it?
Assuming your driver is intel_pstate (the default for Intel Sandy Bridge and Ivy Bridge CPUs as of kernel 3.9): this issue is not specific to Arch, but affects all distros using the new Intel P-state driver for managing CPU frequency/power management. See Arch Linux CPU frequency scaling.

Theodore Ts'o wrote his explanation on Google+:

- intel_pstate can be disabled at boot time with the kernel arg intel_pstate=disable
- The problem with the ondemand governor is that it doesn't know the specific capabilities of the CPU
- Executing some tasks at a higher frequency will consume less power than a lower frequency taking more time, e.g., arithmetic stuff, but this is not true for all tasks, e.g., loading something from memory
- The intel_pstate driver knows the details of how the CPU works and does a better job than the generic ACPI solution
- intel_pstate offers only two governors, powersave and performance. Intel claims that the intel_pstate "powersave" is faster than the generic ACPI governor with "performance"

To change back to the ACPI driver, reboot with the kernel arg intel_pstate=disable, then execute modprobe acpi-cpufreq and you should have the ondemand governor available.

You can make the change permanent by editing /etc/default/grub and adding:

    GRUB_CMDLINE_LINUX_DEFAULT="intel_pstate=disable"

And then updating grub.cfg:

    grub-mkconfig -o /boot/grub/grub.cfg

Follow the instructions for Arch kernel module loading and add the acpi-cpufreq module.
Setting CPU governor to on demand or conservative
1,395,769,777,000
I'm trying to change the CPU frequency on my laptop (running Linux), without any success. Here are some details:

    # uname -a
    Linux yoga 3.12.21-gentoo-r1 #4 SMP Thu Jul 10 17:32:31 HKT 2014 x86_64 Intel(R) Core(TM) i5-3317U CPU @ 1.70GHz GenuineIntel GNU/Linux
    # cpufreq-info
    cpufrequtils 008: cpufreq-info (C) Dominik Brodowski 2004-2009
    Report errors and bugs to [email protected], please.
    analyzing CPU 0:
      driver: intel_pstate
      CPUs which run at the same hardware frequency: 0
      CPUs which need to have their frequency coordinated by software: 0
      maximum transition latency: 0.97 ms.
      hardware limits: 800 MHz - 2.60 GHz
      available cpufreq governors: performance, powersave
      current policy: frequency should be within 800 MHz and 2.60 GHz.
                      The governor "powersave" may decide which speed to use
                      within this range.
      current CPU frequency is 2.42 GHz (asserted by call to hardware).

(similar information for CPUs 1, 2 and 3)

    # cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
    performance powersave

I initially had the userspace governor built into the kernel, but then I also tried building it as a module (with the same results); it was loaded while running the above commands (and I couldn't find any system messages when loading it):

    # lsmod
    Module             Size  Used by
    cpufreq_userspace  1525  0
    (some other modules)

And here are the commands I tried for changing the frequency:

    # cpufreq-set -f 800MHz
    Error setting new values. Common errors:
    - Do you have proper administration rights? (super-user?)
    - Is the governor you requested available and modprobed?
    - Trying to set an invalid policy?
    - Trying to set a specific frequency, but userspace governor is not available,
      for example because of hardware which cannot be set to a specific frequency
      or because the userspace governor isn't loaded?
    # cpufreq-set -g userspace
    Error setting new values. Common errors:
    - Do you have proper administration rights? (super-user?)
    - Is the governor you requested available and modprobed?
    - Trying to set an invalid policy?
    - Trying to set a specific frequency, but userspace governor is not available,
      for example because of hardware which cannot be set to a specific frequency
      or because the userspace governor isn't loaded?

Any ideas?
This is because your system is using the new driver called intel_pstate. There are only two governors available when using this driver: powersave and performance. The userspace governor is only available with the older acpi-cpufreq driver (which will be used automatically if you disable intel_pstate at boot time; you then set the governor/frequency with cpupower):

- Disable the current driver: add intel_pstate=disable to your kernel boot line.
- Boot, then load the userspace module: modprobe cpufreq_userspace
- Set the governor: cpupower frequency-set --governor userspace
- Set the frequency: cpupower --cpu all frequency-set --freq 800MHz
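The same steps can also be done purely through sysfs if cpupower isn't installed. A minimal sketch, assuming the acpi-cpufreq driver is active after booting with intel_pstate=disable (run as root; scaling_setspeed only exists while the userspace governor is selected, and it takes a value in kHz):

modprobe cpufreq_userspace          # provides the "userspace" governor

for c in /sys/devices/system/cpu/cpu[0-9]*/cpufreq; do
    echo userspace > "$c/scaling_governor"   # hand frequency control to userspace
    echo 800000 > "$c/scaling_setspeed"      # 800 MHz, expressed in kHz
done

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq   # verify the new frequency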
Can't use "userspace" cpufreq governor and set cpu frequency
1,395,769,777,000
When I do

sudo watch -n1 cat /sys/devices/system/cpu/cpu*/cpufreq/cpuinfo_cur_freq

I get 1.8 - 2.7 GHz; it never goes above 2.7. And when I do

watch -n1 "cat /proc/cpuinfo | grep MHz"

I get 768 MHz - 1.8 GHz; it never goes above 1.8. Anyone know what is going on?
Most CPUs now include the ability to adjust their speed to help save on battery/power usage. This is typically called CPU frequency scaling.

The realtime speed of the CPU is reported by this:

$ sudo cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq

The absolute (max) CPU speed is reported by this:

$ cat /proc/cpuinfo

Specifically this line:

model name : Intel(R) Core(TM) i5 CPU       M 560  @ 2.67GHz

The line that shows cpu MHz doesn't show the maximum speed of your CPU; that value is your current speed. On a multi-core system such as an i7 or i5 you can see this with this command:

$ cat /proc/cpuinfo | grep MHz
cpu MHz : 1199.000
cpu MHz : 1199.000
cpu MHz : 1199.000
cpu MHz : 2667.000

You can, however, see the absolute (max) speed with this command:

$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
CPU(s):                4
Thread(s) per core:    2
Core(s) per socket:    2
CPU socket(s):         1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 37
Stepping:              5
CPU MHz:               2667.000
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              3072K
NUMA node0 CPU(s):     0-3

NOTE: NUMA node0 CPU(s) shows the cores it has: 4 of them, i.e. 0, 1, 2, and 3.

CPU scaling & governors?

The mode your system is in is called the scaling governor, similar to a governor on a car. You can see which ones are available with this command:

$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
powersave ondemand userspace performance

You can also see which one is currently active:

$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
ondemand

NOTE: The commands I'm showing only include the 1st CPU, cpu0. You can either substitute a * in the path to see all the cores, or you can selectively look at cpu1, etc.

You can see the maximum & minimum CPU speeds available for your governor's profile:

$ sudo cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq
2667000
$ sudo cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_min_freq
1199000

More details are available in the article titled: CPU frequency scaling in Linux with cpufreq.

So what about cpuinfo_cur_freq?

This parameter has more to do with the specification of the CPU and the profile it's currently in, rather than anything useful about how the CPU is currently operating. For actual operational telemetry, use the scaling_* kernel tunables.

Example

I put the following script together to show the CPU cores column-wise, so it's easier to see what the various kernel tunables look like:

#!/bin/bash
# Index of the last core, e.g. "0-3" -> 3.
nthCore=$(lscpu | grep node0 | cut -d"-" -f2)

for i in /sys/devices/system/cpu/cpu0/cpufreq/{cpuinfo,scaling}_*; do
  pname=$(basename $i)
  # Skip tunables that aren't simple per-core values.
  [[ "$pname" == *available* ]] || [[ "$pname" == *transition* ]] || \
  [[ "$pname" == *driver* ]] || [[ "$pname" == *setspeed* ]] && continue
  echo "$pname: "
  # Read the same tunable from every core.
  for j in $(seq 0 $nthCore); do
    kparam=$(echo $i | sed "s/cpu0/cpu$j/")
    sudo cat $kparam
  done
done | paste - - - - - | column -t

When you run it you get the following output:

$ ./cpuinfo.bash
cpuinfo_cur_freq:  2667000   2667000   2667000   2667000
cpuinfo_max_freq:  2667000   2667000   2667000   2667000
cpuinfo_min_freq:  1199000   1199000   1199000   1199000
scaling_cur_freq:  2667000   2266000   1333000   2667000
scaling_governor:  ondemand  ondemand  ondemand  ondemand
scaling_max_freq:  2667000   2667000   2667000   2667000
scaling_min_freq:  1199000   1199000   1199000   1199000

You can see that the scaling_cur_freq tunable shows a slowdown in cores 1 & 2.
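To watch the two readings from the question side by side, a minimal sketch (cpuinfo_cur_freq is typically readable only by root, which is why the question needed sudo; the sysfs values are in kHz while /proc/cpuinfo reports MHz):

# Hardware-asserted per-core frequency (kHz) followed by
# /proc/cpuinfo's view (MHz), refreshing every second.
sudo watch -n1 'cat /sys/devices/system/cpu/cpu*/cpufreq/cpuinfo_cur_freq; grep "cpu MHz" /proc/cpuinfo'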
Why do cpuinfo_cur_freq and /proc/cpuinfo report different numbers?