I created a short script to do this, with a nice output which should make it easy to check the results. It doesn't need the destination directory structure to be created beforehand. Use as follows:

$ ./recursive-symlink.sh --help
Usage:
  ./recursive-symlink.sh <source_path> <dest_path> <find_args...>

To show its usage, let's say I have the following files/dirs at the beginning:

├── recursive-symlink.sh*
└── src/
    ├── dir1/
    │   ├── file_A_misc.txt
    │   └── file_B_sub.txt
    ├── dir3/
    │   ├── file_A3.txt
    │   ├── file_C.txt
    │   └── subsub_dir/
    │       ├── file_Asubsub.txt
    │       └── file_D.txt
    ├── dir_A/
    │   └── should_be_empty.dat
    ├── file_A.txt
    └── file_B.txt

If I run:

$ find -name '*_A*'
./src/file_A.txt
./src/dir3/file_A3.txt
./src/dir3/subsub_dir/file_Asubsub.txt
./src/dir_A
./src/dir1/file_A_misc.txt

I can see which files would be linked. I then run the script like this:

$ ./recursive-symlink.sh src/ dest/ -name '*_A*'
src/file_A.txt
mkdir: created directory 'dest'
'dest/file_A.txt' -> '../src/file_A.txt'

src/dir3/file_A3.txt
mkdir: created directory 'dest/dir3'
'dest/dir3/file_A3.txt' -> '../../src/dir3/file_A3.txt'

src/dir3/subsub_dir/file_Asubsub.txt
mkdir: created directory 'dest/dir3/subsub_dir'
'dest/dir3/subsub_dir/file_Asubsub.txt' -> '../../../src/dir3/subsub_dir/file_Asubsub.txt'

src/dir_A
'dest/dir_A' -> '../src/dir_A'

src/dir1/file_A_misc.txt
mkdir: created directory 'dest/dir1'
'dest/dir1/file_A_misc.txt' -> '../../src/dir1/file_A_misc.txt'

My final state will then be:

├── recursive-symlink.sh*
├── src/
│   ├── dir1/
│   │   ├── file_A_misc.txt
│   │   └── file_B_sub.txt
│   ├── dir3/
│   │   ├── file_A3.txt
│   │   ├── file_C.txt
│   │   └── subsub_dir/
│   │       ├── file_Asubsub.txt
│   │       └── file_D.txt
│   ├── dir_A/
│   │   └── should_be_empty.dat
│   ├── file_A.txt
│   └── file_B.txt
└── dest/
    ├── dir1/
    │   └── file_A_misc.txt -> ../../src/dir1/file_A_misc.txt
    ├── dir3/
    │   ├── file_A3.txt -> ../../src/dir3/file_A3.txt
    │   └── subsub_dir/
    │       └── file_Asubsub.txt -> ../../../src/dir3/subsub_dir/file_Asubsub.txt
    ├── dir_A -> ../src/dir_A/
    └── file_A.txt -> ../src/file_A.txt

You can see that the dest directory is created automatically, as well as all recursive subdirectories, and that in the dest dir only the files that matched the *_A* pattern were linked.

Here is the script source code:

#!/bin/bash

verbose='-v'   # you may comment this line

if [ "$1" == '-h' ] || [ "$1" == '--help' ] || [ $# -lt 3 ]
then
    echo "Usage:"
    echo "  $0 <source_path> <dest_path> <find_args...>"
    exit
fi

src="${1%/}"  ; shift
dest="${1%/}" ; shift
relflag=''    ; [ "${src:0:1}" != '/' ] && relflag='-r'

find "$src" \( "$@" \) -print0 | while IFS= read -r -d '' f
do
    base_fname="${f#$src}"
    [ "$verbose" ] && echo "${f}"
    dest_ln="$dest/${base_fname#/}"
    dest_dir="$(dirname "$dest_ln")"
    mkdir -p $verbose "$dest_dir"
    ln $relflag -s $verbose -t "$dest_dir" "$f"
    [ "$verbose" ] && echo
done
I am trying to symlink a set of specific files from a project I'm working on. There is a known string in each of the filenames I wish to symlink. Here is what I have so far:

ln -s find ~/path/to/src/ -name "*stringtomatch*" find ~/path/to/dest

I have the directory structure set up in the destination to match the source, but it's just directories, so I don't mind deleting those if it's easier to write a command that assumes an empty destination.

Update: I have now accepted a working answer, and I want to share some context so that someone with a similar use case might find a solution more easily. I perform the majority of my coding in NetBeans. When I am building packages for a project I tend to name all of the associated files so that they have part of the file name in common. This allows me to easily find my own package files as I move around within the project. However, in my current project this is very time consuming due to the large volume of files and directories involved. What I have now is a separate project defined for each of my own packages, which shows only the files for that package while maintaining the master project hierarchy. By building separate package projects that use symlinks to my package files within the master project, I have effectively created what I believe to be the perfect solution, where there doesn't seem to be one available within the NetBeans IDE in its current form. Each sub-project does nothing but allow me to work on a subset of files relating only to itself, which really makes my time at the keyboard more efficient. I believe Eclipse has this feature built in, though I do not have Eclipse. So, albeit a compromise, I believe that this workaround for NetBeans is as clean a solution as I could achieve today. It's a huge bonus that it works as well as, if not better than, I had anticipated. I had expected to run manual synchronisations of the master after edits to the sub-projects. This is not the case; the master still maintains automatic synchronisation.
Recursively "find" file names containing string and symlink files in another directory
From the ln man page:

When creating hard links, each TARGET must exist.

No mention of symlinks there; in fact, this statement seems to imply that this is not the case for symlinks. As I said in my comment on your question, when creating a symlink to a non-existent source, a broken link is created:

$ ln -sfv blah blabla
'blabla' -> 'blah'
$ file blabla
blabla: broken symbolic link to 'blah'

As far as ln is concerned, there's no reason to cry error: you asked for a symlink and it obliged. Shy of aliasing ln, I don't see a way to do what you want without explicitly checking for the existence of the source file.
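For illustration, a minimal sketch of such an explicit check, wrapped in a function (the name ln_checked is made up; note that [ -e ] also rejects a source that is itself a broken symlink):

ln_checked() {
    # refuse to create a symlink whose source doesn't currently exist
    if [ ! -e "$1" ]; then
        printf 'ln_checked: %s: No such file or directory\n' "$1" >&2
        return 1
    fi
    ln -sv -- "$1" "$2"
}

It is called just like ln -s: ln_checked source_file target_file.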
ln -sf source_file target_file succeeds even when source_file does not exist. ln -f source_file target_file, on the other hand, fails as expected. How can it be tuned to give an error in the first case without explicitly testing for the file's existence first (i.e. not [[ -e source_file ]] && ln -sf source_file target_file)?
Why does ln -sf silently fail?
It seems the setting backupcopy is auto (run :set backupcopy? in Vim to confirm). The main values are:

yes   make a copy of the file and overwrite the original one
no    rename the file and write a new one
auto  one of the previous, what works best […]

The auto value is the middle way:

When Vim sees that renaming file is possible without side effects (the attributes can be passed on and the file is not a link) that is used. When problems are expected, a copy will be made.

In case it's not clear: yes (copy and overwrite) does not change the inode number; no (rename and write anew) does change it. In your case at first auto was like no. After ln ./11.cpp ./xxx Vim noticed there is another link and auto worked like yes.
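A quick way to watch this from the shell, assuming Vim's default writebackup is still on (the file name is arbitrary):

$ ls -i file.txt                                 # note the inode number
$ vim -c 'set backupcopy=yes' -c 'wq' file.txt
$ ls -i file.txt                                 # same inode: Vim overwrote in place
$ vim -c 'set backupcopy=no' -c 'wq' file.txt
$ ls -i file.txt                                 # new inode: Vim renamed and wrote anew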
I use Vim 8.2 to edit my files on my Ubuntu 18.04. When I open a file, make some changes and quit with Vim, the inode number of this file is changed. As I understand it, it's because the backup mechanism of my Vim is enabled, so each edit creates a new file (.swp file) to replace the old one. A new file has a new inode number. That's it. But I found something weird. As you can see below, after the first vim 11.cpp, the inode changed, 409980 became 409978. However, after creating a hard link for the file 11.cpp, no matter how I modify the file 11.cpp with my Vim, its inode number won't change anymore. And if I delete the hard link xxx, its inode number will be changed by each Vim edit again. This really makes me confused.

$ ll -i ./11.cpp
409980 -rw-rw-r-- 1 zyh zyh 504 Dec 22 17:23 ./11.cpp
$ vim 11.cpp       # append a string "abc" to the file 11.cpp
$ ll -i ./11.cpp
409978 -rw-rw-r-- 1 zyh zyh 508 Dec 22 17:25 ./11.cpp
$ vim ./11.cpp     # remove the appended "abc"
$ ll -i ./11.cpp
409980 -rw-rw-r-- 1 zyh zyh 504 Dec 22 17:26 ./11.cpp
$ ln ./11.cpp ./xxx   # create a hard link
$ ll -i ./11.cpp
409980 -rw-rw-r-- 2 zyh zyh 504 Dec 22 17:26 ./11.cpp
$ vim 11.cpp       # append a string "abc" to the file 11.cpp
$ ll -i ./11.cpp
409980 -rw-rw-r-- 2 zyh zyh 508 Dec 22 17:26 ./11.cpp
$ vim 11.cpp       # remove the appended "abc"
$ ll -i ./11.cpp
409980 -rw-rw-r-- 2 zyh zyh 504 Dec 22 17:26 ./11.cpp
Why didn't the inode change anymore with a hard link
If you have GNU or BSD find, this should do it: find -lname '/home/Steven/*' -delete
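If your find has -lname but lacks -delete, roughly the same effect can be had with -exec, starting explicitly from the directory that install.sh populated:

find /usr/local/bin -lname '/home/Steven/*' -exec rm -- {} +

Swap rm for echo (or use -print instead of the -exec clause) first to preview what would be removed.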
I have an "install.sh" that installs my personal scripts: find /home/Steven -name '*.sh' -exec ln -s -t /usr/local/bin {} +I would like to make an "uninstall.sh" that removes the symbolic links created by "install.sh". I wrote this: for z in /usr/local/bin/* do if [ -h "$z" ] then rm "$z" fi donebut it removes all symlinks, not just ones where the target is under "/home/Steven".
Remove symlinks originating from specific directory
A plain ln -s treats its target as a string. It doesn't care whether that string happens to be a path to an existing file. The GNU extension ln -sr treats its target as a filesystem location. It needs to dereference directories in the target, because otherwise the resulting actual target would be wrong:

mkdir /tmp/demo /tmp/foreign
cd /tmp/demo
mkdir child
ln -s -r foo child/foo        # -> ../foo
ln -s /tmp/foreign elsewhere
ln -s -r foo elsewhere/foo    # -> ../demo/foo, not ../foo

This could have been implemented without dereferencing the final component of the path, but it wasn't. The manual explains how to obtain a link that's relative to the given directory. Instead of using ln -r, call realpath (which is also part of GNU coreutils) to construct the relative path. The invocation is different depending on whether you're passing the link name or the name of a directory where the link will be created. If $dir is a directory:

ln -s -- "$(realpath -m --relative-to "$dir" -- "$target")" "$dir"

If $linkname is the path to the link to create:

ln -s -- "$(realpath -m --relative-to "$(dirname -- "$linkname")" -- "$target")" "$linkname"
I want to create a relative symlink pointing to a relative symlink, not to the relative symlink's target. It seems that when creating a relative symlink, ln resolves TARGET instead of pointing to the actual TARGET. Can this process be skipped? In the example below, I would want rsym-to-existing-rsym.txt to point to rsym-to-target.txt, not to target.txt. When I create a symlink to a not-yet-existing symlink it works (the rsym-to-future-rsym.txt -> future-rsym-to-target.txt -> target.txt chain in the example below).

Example

# Create initial structure
$ mkdir -p temp
$ touch temp/target.txt

# Now, create relative symlinks
$ ln -s -r temp/target.txt temp/rsym-to-target.txt
$ ln -s -r temp/rsym-to-target.txt temp/rsym-to-existing-rsym.txt
# Note that at this point temp/future-rsym-to-target.txt doesn't exist yet
$ ln -s -r temp/future-rsym-to-target.txt temp/rsym-to-future-rsym.txt
$ ln -s -r temp/target.txt temp/future-rsym-to-target.txt

$ tree -F temp/
# Note that rsym-to-existing-rsym.txt points to target.txt, not to rsym-to-target.txt
temp/
├── future-rsym-to-target.txt -> target.txt
├── rsym-to-existing-rsym.txt -> target.txt
├── rsym-to-future-rsym.txt -> future-rsym-to-target.txt
├── rsym-to-target.txt -> target.txt
└── target.txt

# Maybe the tools I'm using are just nice to me and resolve final paths?
# No, when I delete the rsym-to-target.txt symlink, rsym-to-existing-rsym.txt still points to target.txt
$ rm temp/rsym-to-target.txt
$ tree -F temp/
temp/
├── future-rsym-to-target.txt -> target.txt
├── rsym-to-existing-rsym.txt -> target.txt
├── rsym-to-future-rsym.txt -> future-rsym-to-target.txt
└── target.txt
ln: create relative symlink to a relative symlink
I hope you have a backup of dummy1! From the man page for ln:

-f, --force
    remove existing destination files

So dummy1 has been removed and replaced by the symlink. If you want to prevent this in the future, do not use the -f flag to ln.
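With GNU ln there is also a middle ground worth knowing for next time: the -b (--backup) option renames an existing destination instead of discarding it, so the original would have survived. A sketch using the names from the question:

ln -snbf dummy dummy1   # the old dummy1 is renamed to dummy1~ before the link is made
ls dummy1~              # the original file, preserved as a backup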
So there was a file dummy1. I created a symlink:

ln -snf dummy dummy1

and confused the source and target files, as I actually wanted dummy to point to dummy1, not vice versa. So now dummy1 is a symlink. Is the original dummy1 file removed by doing this? Is there any way to get it back? I would expect to be able to get it back somehow, because otherwise that would be strange, as even the rm command asks for confirmation. Thank you
Is existing file removed when a symlink is created with the same name?
It's different with and without -s. With -s, in:

ln -s path/to/file some/dir/link

path/to/file is set as the symlink target of some/dir/link (or some/dir/link/file if link was a directory). A symlink is a special type of file which contains a path (it can be any array of non-0 bytes; some systems even allow an empty string) which is the target of the symlink. ln sets it to the first argument. Upon resolving the link (when the link is used later on), the path/to/file path will be relative to the directory the link is (hard-)linked to (some/dir here when some/dir/link is accessed via its some/dir/link path). Note that path/to/file doesn't need to exist at the time the ln command is run (or ever). While in:

ln path/to/file some/dir/link

It's similar to:

cp path/to/file some/dir/link

The path/to/file is relative to the current working directory of the process running ln. Nothing stops you from creating several (hard-)links to a symlink. For example:

$ mkdir -p a b/c b/a
$ ln -s ../a b/L    # b/L a symlink to "a"
$ ln b/L b/c/L

b/L and b/c/L are the same file: same inode but linked to two different directories. They are both symlinks with target ../a. But when b/L is resolved, that points to ./a, while when b/c/L is resolved, that points to ./b/a.
How do the relative paths work in ln (-s or not)? For example, if I type ln -s foo bar/banana.txt, what does this mean? What is foo relative to? Because it doesn't seem to be relative to the current path. Also, is it different if I remove -s or not? I've tested it out and the result doesn't make sense to me, and the man page doesn't seem to explain this. Could anyone explain?
Paths in ln with hard links and soft links
libgfortran.so.3 from Fedora 9: provides.log → libgfortran.so.3(GFORTRAN_1.0)(64bit), libgfortran = 4.3.0-8

The original package libgfortran-4.3.0-8.x86_64.rpm will conflict if any Fortran-dependent applications are installed (e.g. 'openblas-thread'), so a rebuild to a new name is required. compat-libgfortran-4.3.0-8.fc27.x86_64.rpm installs with no issues. Link → https://drive.google.com/file/d/18uMtX2n4-bwM2V2TfOl-w_Fk8t6YSlsk/view?usp=sharing

Install:

# cd Downloads/ && yum install ./compat-libgfortran-4.3.0-8.fc27.x86_64.rpm

P.S.: The objects GFORTRAN_1.0, GFORTRAN_1.4 are also present in later versions, up to v.6.x: Fedora 24 → v. 6.3.1, "compat" package = compat-libgfortran-6.3.1-1.fc27.x86_64.rpm: updates the previously installed compat-libgfortran. Link https://drive.google.com/file/d/1f9nPFjuMBGg1XIza_Ajokkm_d7VYmF0_/view?usp=sharing

describe how you built the renamed packages

Write a new spec file (I used pkgtool2 to create compat-libgfortran.spec https://drive.google.com/file/d/0B7S255p3kFXNQ0ZEbHB1V1BUa0E/view?usp=sharing):

Summary: None
Name: compat-libgfortran
Version: 6.3.1
Release: 1.fc27
License: GPL
Group: None
Packager: Jerry Donut <[emailprotected]>
BuildArchitectures: x86_64
BuildRoot:

%description
No description

%files
/usr/lib64/libgfortran.so.3
/usr/lib64/libgfortran.so.3.0.0

Copy compat-libgfortran.spec to /home/[name]/rpms/SPECS/ https://www.linuxquestions.org/questions/linux-software-2/need-rpm-package-for-php-version-5-2-7-and-up-on-redhat-5-1-a-766486/#13 ... and run

$ rpmbuild -bb compat-libgfortran.spec
I am trying to use Fortran code called SAMMY-8 which has its binary ready for use. I was using it without any issue while I had f25 installed. When upgrading to f27 I got the following error when trying to run the software:

sammy: error while loading shared libraries: libgfortran.so.3: cannot open shared object file: No such file or directory

At first I thought of making a soft link to libgfortran.so.4 by using ln -s /usr/lib/libgfortran.so.4 /usr/lib/libgfortran.so.3, but when trying to run the code I got

sammy: /lib64/libgfortran.so.3: version `GFORTRAN_1.0' not found (required by sammy)
sammy: /lib64/libgfortran.so.3: version `GFORTRAN_1.4' not found (required by sammy)

I also tried to install gcc-4.9.2 by installing the following rpm files

devtoolset-3-gcc-4.9.2-6.2.el7.x86_64.rpm
devtoolset-3-gcc-c++-4.9.2-6.2.el7.x86_64.rpm
devtoolset-3-libstdc++-devel-4.9.2-6.2.el7.x86_64.rpm
devtoolset-3-runtime-3.1-12.el7.x86_64.rpm

The installation was successful, so I typed scl enable devtoolset-3 bash in order to be able to use gcc-4.9.2 and then ran SAMMY again, but I still get

sammy: /lib64/libgfortran.so.3: version `GFORTRAN_1.0' not found (required by sammy)
sammy: /lib64/libgfortran.so.3: version `GFORTRAN_1.4' not found (required by sammy)

Any idea on how to get GFORTRAN_1.0 and GFORTRAN_1.4 on f27?
Use libgfortran.so.3 and GFORTRAN_1.0 on Fedora 27
Two possible solutions spring to mind.

1. Iterate across all the directories in LOREM and symlink them to $HOME

cd "$HOME/LOREM"
for item in *
do
    test -d "$item" || continue
    mv -f "$HOME/$item" "$HOME/$item.DELETE_ME_LATER" 2>/dev/null
    ln -s "$HOME/LOREM/$item" "$HOME/$item"
done

# Once you are happy that only the correct files have been replaced
# rm "$HOME"/*.DELETE_ME_LATER

You can prefix the rm and ln with echo (e.g. echo rm -f "Users/masi/$item") to see the effect of the script before it makes any changes.

2. Process the set of existing files and convert them to proper symlinks

This one will need some heuristics (guesswork), because there is nothing concrete that identifies a file-that-should-be-a-symlink. Something like this might work:

for file in *
do
    # Skip files that we have already processed
    [[ $file =~ DELETE_ME_LATER ]] && continue

    # Look for a path-like string in the file
    path=$(grep "^$HOME/" "$file")
    if test -d "$path"
    then
        # It is a directory
        mv -f "$file" "$file.DELETE_ME_LATER"
        ln -s "$HOME/LOREM/$file" "$file"
    fi
done

# Once you are happy that only the correct files have been replaced
# rm *.DELETE_ME_LATER

Again, you can prefix the mv and ln statements with echo to see the effect without applying any changes.
I moved my filesystem and symlinks from Ubuntu 14.04 to 16.04 by using a FAT32 memory card, which apparently broke those links; I stopped using BitTorrentSync. A complicating condition is that those links are remnants of my OSX installation, because of XSym. I do ls -la $HOME | grep Math for a symlink:

-rw-r--r--  1 masi  masi  1067 May 17 21:28 Math

whose contents in the text editor are

XSym
0078
48055bd2d9c13568c969e1eb8d6a22ac
/Users/masi/Math/

It should point to /Users/masi/LOREM/Math/ instead. Just correcting the PATH manually does not work, since the link stays dead. Gilles' command can be applicable here too:

find /Users/masi/Math/ / -lname '/Users/masi/LOREM/Math/*' \
    -exec sh -c 'ln -snf "/mnt$(readlink "$0")" "$0"' {} \;

where I am not sure if I got the source and destination in the correct order. Systems: Ubuntu 14.04, Ubuntu 16.04
How to Relink these symlinks after moving the system?
Found it:

If sysctl fs/protected_hardlinks is set, hard links by someone not the owner (and without CAP_FOWNER) must be:

- not special
- not setuid
- not executable setgid
- both readable and writable

according to fs/namei.c. Some guy on SO wanted to have a dropbox folder people could add to but not see into (I think that's a Windows feature), I figured this was one of the few places a setgid would be good, and the smoketest drove me here. Thanks to all and especially Anthon who suggested checking the source. (edit: sysctl spelling)
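The knob is also reachable through the sysctl utility directly:

# check whether the restriction is active (1 = yes)
sysctl fs.protected_hardlinks

# relax it system-wide (shown for completeness; generally not recommended)
sudo sysctl -w fs.protected_hardlinks=0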
I've checked the manpages, the mount, the permissions ... (edit: combined history into one sequence as requested. Starting to seem a not-simple problem. Nothing new since last edit, just bundled up all pretty)

~/sandbox/6$ editfunc doit
~/sandbox/6$ -x doit
+ doit
+ find .
+ cp /bin/ln /bin/id .
+ sudo chown jthill:jthill id ln
+ chmod g+s id ln
+ mkdir protected
+ chmod 770 protected
+ touch data
+ set +xv
~/sandbox/6$ ls -A
data  id  ln  protected
~/sandbox/6$ ls -Al
total 92
-rw-r--r-- 1 jthill jthill     0 Nov  8 02:39 data
-rwxr-sr-x 1 jthill jthill 31432 Nov  8 02:39 id
-rwxr-sr-x 1 jthill jthill 56112 Nov  8 02:39 ln
drwxrwx--- 2 jthill jthill  4096 Nov  8 02:39 protected
~/sandbox/6$ sudo su nobody
[nobody@home 6]$ ./id
uid=619(nobody) gid=617(nobody) egid=1000(jthill) groups=617(nobody)
[nobody@home 6]$ ./ln ln protected
./ln: failed to create hard link ‘protected/ln’ => ‘ln’: Operation not permitted
[nobody@home 6]$ ./ln data protected
./ln: failed to create hard link ‘protected/data’ => ‘data’: Operation not permitted
[nobody@home 6]$ ln ln protected
ln: failed to create hard link ‘protected/ln’ => ‘ln’: Permission denied
[nobody@home 6]$ ln data protected
ln: failed to create hard link ‘protected/data’ => ‘data’: Permission denied
[nobody@home 6]$ exit
~/sandbox/6$
setgid binary doesn't have permission, mount's right, I'm missing something, but what, please?
The code ln /etc/setuid_script -i is intended to create a hardlink to a file called -i in the current directory. You might need to say ln -- /etc/setuid_script -i to make this work if you are using GNU tools. The shell can get commands to run in 3 different ways:

1. From a string. Use sh -c "mkdir /tmp/me" with the -c flag.
2. From a file. Use sh filename.
3. From the terminal. Use sh -i or sh.

Historically, when you have a shell script called foo starting with #!/bin/sh, the kernel invokes it with a filename, i.e. /bin/sh foo, to tell it to use the 2nd way of reading commands. If you give it a filename of -i then the kernel invokes /bin/sh -i and you get the third way. There are also race conditions. This was exploited thus:

1. The exec system call is invoked to start the script.
2. The kernel sees the file is SUID, and sets the permissions of the process accordingly.
3. The kernel reads the first few bytes of the file to see what kind of executable it is, finds the #!/bin/sh and so sees it is a script for /bin/sh.
4. The attacker replaces the script.
5. The kernel replaces the current process with /bin/sh.
6. The /bin/sh opens the filename and executes the commands.

This is a classic TOCTTOU (time of check to time of use) attack. The check in step 2 is against a different file to the one used (in the open call) in step 6. Both these bugs are usually fixed these days.
I was making a Bash script with the setuid permission on, but it didn't work. So I found my solution here: Why does setuid not work? and Allow setuid on shell scripts. Now my script works fine and all (I rewrote it in cpp). To satisfy my curiosity as to why pure Bash shell didn't work, I read this link: http://www.faqs.org/faqs/unix-faq/faq/part4/section-7.html (referenced by this answer: https://unix.stackexchange.com/a/2910). At that site, I came across the following:

$ echo \#\!\/bin\/sh > /etc/setuid_script
$ chmod 4755 /etc/setuid_script
$ cd /tmp
$ ln /etc/setuid_script -i
$ PATH=.
$ -i

I don't understand the fourth line, which reads ln /etc/setuid_script -i. What does that command do? I've read in the ln manual that -i is just the "interactive" flag (asking whether you want to overwrite an existing file or not). So why does ln /etc/setuid_script -i followed by PATH=. and -i make my shell execute /bin/sh -i?
What does `ln /path/to/file -i` do in a setuid'ed script?
The -T (--no-target-directory) option to GNU ln provides a safety feature that may be useful in scripts. Suppose that you want to create a new name, $newname, for a file $filename, where the new name is maybe provided from external sources. The command

ln -T "$filename" "$newname"

would then fail if $newname was an already existing directory, instead of unexpectedly creating the name $filename inside that directory (which may cause further operations to fail in hilarious ways). It's a shortcut for something like

if [ ! -e "$newname" ]; then
    ln "$filename" "$newname"
else
    printf 'failed to create hard link "%s": File exists\n' "$newname" >&2
    # Further code to handle failure to create link here.
fi

Likewise, the -t (--target-directory) option provides a way of ensuring that the new name for the file is actually created inside an existing directory, and nowhere else. Also, as pointed out by Stephen Kitt in comments, moving the test on the file type of the target/"link name" into the utility itself may also decrease the risk of being affected by the race condition whereby the target is changed in between testing for its existence and/or type and actually creating the link.

Why does POSIX or BSD not have -T or -t? Well, GNU tools in general have many extensions added that provide convenience. The -T and -t options to ln are some of these. They don't really let you do something that couldn't be done without them, and they don't add functionality. Some systems, like the BSDs, have not even considered adding them, or have considered but rejected the idea of adding them (I don't really know; I can't recall seeing anyone send in a patch to add it on the openbsd-tech mailing list, for example).
What does ln -T do? I know the flag does not exist in the BSD version of ln, and it only exists in the GNU version, and I have read the documentation that it will make ln "treat LINK_NAME as a normal file always", but what does that mean and why does the BSD version not have it?
What does "ln -t" do [duplicate]
Accessing a file through a symbolic link is equivalent to replacing the file's base name by the symlink text if the symlink text doesn't start with / (relative link), and to replacing the file's full path by the symlink text if the symlink starts with / (absolute link). If there's a trailing slash in the symlink text, so be it. A trailing slash in a file name means "the file must be a directory". If the target of the link is a directory, then accessing a file in it results in a calculated path that contains two slashes: one from the symlink text and one as the directory separator. Given

lrwxr-xr-x 1 user wheel 15B 2 Aug 08:36 test-notail -> /ln-test/FOLDER
lrwxr-xr-x 1 user wheel 16B 2 Aug 08:36 test-tail -> /ln-test/FOLDER/

then test-notail/foo is equivalent to /ln-test/FOLDER/foo and test-tail/foo is equivalent to /ln-test/FOLDER//foo. Multiple slashes are as good as one (with one exception: a path that begins with exactly two slashes, on some systems). So a trailing slash (or multiple trailing slashes) in a symbolic link to a directory doesn't make a difference to the system. If an extra slash makes a difference to an application, that's a bug in the application.
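A quick demonstration of the equivalence, as a sketch with throwaway paths under /tmp:

mkdir -p /tmp/ln-test/FOLDER
echo hello > /tmp/ln-test/FOLDER/foo
ln -s /tmp/ln-test/FOLDER/ test-tail
ln -s /tmp/ln-test/FOLDER  test-notail
cat test-tail/foo     # resolves to /tmp/ln-test/FOLDER//foo -- prints hello
cat test-notail/foo   # resolves to /tmp/ln-test/FOLDER/foo  -- prints hello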
I have found that when creating a symbolic link to a folder, it is produced with or without the trailing slash based on your input. For example:

$ ln -sfv /ln-test/FOLDER/ test-tail
test-tail -> /ln-test/FOLDER/

$ ln -sfv /ln-test/FOLDER test-notail
test-notail -> /ln-test/FOLDER

$ ll /ln-test
total 16
drwxr-xr-x 2 user wheel 68B 2 Aug 08:35 FOLDER
lrwxr-xr-x 1 user wheel 15B 2 Aug 08:36 test-notail -> /ln-test/FOLDER
lrwxr-xr-x 1 user wheel 16B 2 Aug 08:36 test-tail -> /ln-test/FOLDER/

In the example above, tested on both my Mac and Debian boxes, the resulting link matches the trailing slash of the input. As I understand it, this should probably not matter; however, recently we have run into a bug in Objective-C that is the result of having a trailing slash. We still need to track this bug down, but remaking the link without the trailing slash fixes the issue. So my real question is: should it matter if a link to a directory has a trailing slash or not?
Why can a softlink to a directory be created with or without the trailing slash?
ln source target 2>/dev/null ||
    ln -s source target 2>/dev/null ||
    exit 1

or, slightly more "interactively" (chattier),

if ! ln source target 2>/dev/null; then
    echo 'failed to create hard link, trying symbolic link instead' >&2
    if ! ln -s source target 2>/dev/null; then
        echo 'that failed too, bailing out' >&2
        exit 1
    fi
fi

Remove the redirections to /dev/null to see the error messages displayed by ln (if any).
I am trying to write a shell script to create links for files from my dotfiles repo to my home folder. I want to use a hard link if possible, because it cannot be broken when moving it somewhere within the same filesystem as HOME. But if I clone the dotfiles to another filesystem, I have to use a symlink instead. So, how do I create a hard link for a file if possible, and otherwise use a symlink, in a shell script?
Create hard link if possible, else use symlink
The directories /home and /tmp aren't really appropriate for this, and neither is using a symbolic link. Make a directory to store the file and set up permissions for it using an ACL. Let's say that your username is peter. Some of the commands below might be superfluous, and these are given merely to be explicit.

# Make a new directory to store the `file.txt`.
sudo mkdir /var/my_dir

# Change ownership and group ownership to root.
sudo chown root:root /var/my_dir

# Only allow root and members of root to read the directory.
sudo chmod 0750 /var/my_dir

# Begin to augment standard permissions with ACLs.
# Below, allow peter rwx for all new file system objects in /var/my_dir.
# (-d means "default" and -m means "mask")
setfacl -d -m u:peter:rwx /var/my_dir

# Set the same mask for the directory itself.
setfacl -m u:peter:rwx /var/my_dir

# Below, allow postgres r-x for all new file system objects in /var/my_dir.
setfacl -d -m u:postgres:r-x /var/my_dir

# Set the same mask for the directory itself.
setfacl -m u:postgres:r-x /var/my_dir

Now, peter can create files in /var/my_dir, and postgres can read them. It may also be convenient to link the directory in your home directory.

cd && ln -s /var/my_dir .

Files in /tmp should disappear on reboot. Generally speaking, or perhaps arguably, it would not be a good practice to link to files in your home directory. I could expound on that statement if you don't already understand. A better location for this purpose might be /usr/local/var/my_dir, but the main point is to try to get the permissions right instead of using /tmp and /home with symbolic links for this purpose.

Update

This might also be done in a standard, simpler way that would be more compatible with other software like SFTP/SCP clients.

sudo mkdir /var/my_dir
sudo chown peter:postgres /var/my_dir
sudo chmod 0750 /var/my_dir

Now, whatever files exist in /var/my_dir can only be read by root, peter and postgres, while only peter and root can write. Then just make sure your umask creates files that postgres can read.

cd
touch test
ls -l test

If the result shows r for "others," then postgres will be able to read the file in /var/my_dir. Yet another approach...

sudo touch /usr/local/var/file.txt
sudo chown peter:postgres /usr/local/var/file.txt
sudo chmod 0640 /usr/local/var/file.txt
cd
ln -s /usr/local/var/file.txt .

Above, we work with a single file, no directories. Again, all of these are simply setting permissions. You merely have to decide how you want to approach the situation, having more knowledge about what you are doing than what we can read in the question.
Target situation:

- data/file.txt owned by myUser:myUser and "-rw-rw-rw-" (chmod 666)
- symbolic link /tmp/file.txt owned by postgres:postgres and "-rw-rw-rw-"

So, I can edit the file with my user, and the other user (postgres) can read and write it also, but the link and the file are owned by different users. Real world situation: same step by step as this other question,

sudo rm /tmp/file.txt   # if it exists, remove

cd ~
sudo chmod 666 data/file.txt
ls -l data/file.txt    # "-rw-rw-rw-" as expected
more data/file.txt     # working fine
sudo ln -sf $PWD/data/file.txt /tmp/file.txt   # fine
ls -l /tmp/file.txt    # "lrwxrwxrwx", /tmp/file.txt -> /home/thisUser/file.txt
more /tmp/file.txt     # fine

sudo chown -h postgres:postgres /tmp/file.txt

sudo more /tmp/file.txt   # NOT WORKING!

A workaround is sysctl -w fs.protected_symlinks=0 (then more /tmp/file.txt will work fine), but it is not secure; I need another solution. See the real-life problem here.
Best workaround for file's symbolic link with different group than file
It depends on where your temporary directory is. That is, have you created your own temporary directory, or are you using the system's (/tmp)? In your scenario, you are expecting the files/folder to remain after the temporary directory has been cleaned up. If it's in the system's /tmp directory then it may well be cleaned up by the system (it's distro specific, but most have a cron job or similar). Additionally, a few distros create their /tmp directory using tmpfs, which means that the contents are held in RAM/swap and don't survive a reboot. The files will only remain accessible if you create a hard link. However, hard links can only be created within a single mounted filesystem. You cannot create a hard link between a tmpfs /tmp and a (e.g.) ext4 filesystem mounted on /mystuff. You can create a soft link from /mystuff to somewhere on a tmpfs mounted at /tmp, but when the temp files are deleted the link will point to 'nowhere', which defeats the object slightly! If your distro has its /tmp files on a physical disk which is on the same mount as the location you plan to store your files (/mystuff), then a hard link would work as long as the link is created before the system cleans up /tmp.
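The same-filesystem precondition can be checked before choosing between a hard-linked copy and a real copy; a sketch with hypothetical paths (stat -c is GNU):

src=/tmp/myoutput/folder   # hypothetical script output
dstdir=/mystuff            # hypothetical destination
if [ "$(stat -c %d "$src")" = "$(stat -c %d "$dstdir")" ]; then
    cp -rl "$src" "$dstdir"/   # same filesystem: hard-link the files
else
    cp -r "$src" "$dstdir"/    # different filesystem: a real copy is needed
fi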
I have a script that creates a temp directory using mktemp -d. A folder generated in the temp directory is the output of the script and will be copied to another part of the machine. I was considering using ln to use the same folder instead of copying the contents somewhere else. I was wondering if it would still be around if the version of the folder in the /tmp directory got cleaned up by the OS?
Do Links to /tmp files get deleted?
Aliases are good for giving another name to a command, or for passing default arguments. They are not good beyond that, for example to modify an argument. Use a function instead. To support multiple file names easily, change to the target directory first. Use parentheses instead of braces to create a subshell so that the directory change does not affect the parent shell. banana () ( cd /usr/local/nagios/etc/objects/ && emacs "$@" )
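Usage then looks like this (hosts.cfg is just an invented second file to show that "$@" passes multiple names through):

banana firewall.cfg             # opens /usr/local/nagios/etc/objects/firewall.cfg
banana firewall.cfg hosts.cfg   # multiple files work too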
I'm trying to get the following working:

alias banana='emacs /usr/local/nagios/etc/objects/'

So I want to just type "banana firewall.cfg" to edit this file (/usr/local/nagios/etc/objects/firewall.cfg). If I type that, emacs opens two buffers, one for "/usr/local/nagios/etc/objects" in directory edit mode, and the other just as a blank second file called firewall.cfg, which is expected, and obviously the command "bananafirewall.cfg" doesn't work. I've been scratching my head for a good 30 minutes. Is alias even the right command for this? I guess I could ln -s all these files to /root, but any other suggestions?
alias for 'emacs /usr/local/nagios/etc/objects'
What is the advantage of keeping symbolic links relative to a directory? Because this allows one to move the directory itself without breaking the symbolic links?

Exactly.

In addition, is it possible to create a symbolic link with .. (parent directory) in the path without being in the directory? ln -sv '/etc/init.d/rsyslog' '/etc/rc3.d/../init.d/rsyslog' obviously does not work, as the symlink is created in the /etc/init.d/ directory while I would like to have a symlink named ../init.d/rsyslog in the /etc/rc3.d/ directory.

So you mean to express ln -s ../init.d/rsyslog /etc/rc3.d. In order not to slip on symlinks, it is best to keep in mind that ln has the semantics of ln TARGET LINK and that a symlink is in essence a file at LINK that contains TARGET; if TARGET is a relative pathname, then TARGET substitutes the symlink relative to its containing directory. Thus, ln -s X/Y A/B/ becomes A/B/X/Y.
I have a number of files in the /etc/rc3.d/ directory which are all symbolic links and point to files in the /etc/init.d/ directory using the ../init.d/ designation. For example, file S18rsyslog in the /etc/rc3.d/ directory is a symbolic link to file /etc/init.d/rsyslog, but according to ls -l, /etc/rc3.d/S18rsyslog does not point to /etc/init.d/rsyslog but to ../init.d/rsyslog instead:

# ls -l /etc/rc3.d/S18rsyslog
lrwxrwxrwx 1 root root 17 30. jun 15:05 /etc/rc3.d/S18rsyslog -> ../init.d/rsyslog
#

What is the advantage of keeping symbolic links relative to a directory? Because this allows one to move the directory itself without breaking the symbolic links? In addition, is it possible to create a symbolic link with .. (parent directory) in the path without being in the directory? ln -sv '/etc/init.d/rsyslog' '/etc/rc3.d/../init.d/rsyslog' obviously does not work, as the symlink is created in the /etc/init.d/ directory while I would like to have a symlink named ../init.d/rsyslog in the /etc/rc3.d/ directory.
understand the designation of symbolic links
ln without options creates a hard link, as documented in the manual page for link, especially the section explaining the error EXDEV, which contains the remark:

link() does not work across different mount points, even if the same filesystem is mounted on both

Although I realize that the paragraph below does not address the problem, I won't remove it from my answer. It might still be useful for some readers. A hard link points to an inode number in the same filesystem and can therefore not be created across filesystems. You can use a symbolic link instead (-s option).
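One way to see the two distinct mount points hiding behind the same device is util-linux's findmnt, using the paths from the question:

findmnt --target /cust      # shows the mount backing /cust
findmnt --target ~/backup   # shows the (different) mount backing ~/backup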
On openSUSE Tumbleweed 20210606 with GNU/Linux kernel 5.12.9-1-default I tried making a hard link of a file from /cust to ~/backup:

df /cust && df ~/backup && ln -P /cust/customization.tar ~/backup/

and got a result with an error message:

Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/sda3      706523136 158883972 546393196  23% /
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/sda3      706523136 158883972 546393196  23% /home
ln: failed to create hard link '/home/luli/backup/customization.tar' => '/cust/customization.tar': Invalid cross-device link

Why does it say that from /dev/sda3 to /dev/sda3 is cross-device, and where can I get more details? Thanks.
about ln command : condition of cross-device
This is a bug and appears in the coreutils from version 8.16 to 8.21. It was fixed in 8.22. From the release notes of version 8.22:

ln --relative now updates existing symlinks correctly. Previously it based the relative link on the dereferenced path of an existing link. [This bug was introduced when --relative was added in coreutils-8.16.]

https://savannah.gnu.org/forum/forum.php?forum_id=7815
I have an issue with creating symbolic links with ln, with the relative and the force flag set. The scenario is as follows:

$ tree
.
├── folder1
│   └── file
└── folder2

I create the link:

$ ln -sfr folder1/file folder2
$ tree
.
├── folder1
│   └── file
└── folder2
    └── file -> ../folder1/file

This is as I want it. But when I re-execute the command, I don't understand why the link is now pointing to itself:

$ ln -sfr folder1/file folder2
$ tree
.
├── folder1
│   └── file
└── folder2
    └── file -> file

Executing the command a third time corrects the error:

$ ln -sfr folder1/file folder2
$ tree
.
├── folder1
│   └── file
└── folder2
    └── file -> ../folder1/file

Re-executing the command multiple times toggles between the two states. I really wonder why this is. According to the manual this should be no issue. The ln version used (as shipped with Ubuntu 14.10):

$ ln --version
ln (GNU coreutils) 8.21
[...]
Inconsistent behaviour creating symbolic links with relative and force flag
The proper solution to something like this in Puppet is to create a defined type:

define folder_link (
  $link_map = $name,
) {
  $link_map_split = split($link_map, ':')
  $origin    = $link_map_split[0]
  $link_name = $link_map_split[1]
  $link_path = "/folders_1_to_x/yy/$link_name"

  file { $link_path:
    ensure => link,
    target => $origin,
  }
}

class foo {
  folder_link { ["/aa/bb/folder_to_link:foo", "/cc/dd/folder_to_link:bar"]: }
}

This will symlink /folders_1_to_x/yy/foo to point at /aa/bb/folder_to_link, and /folders_1_to_x/yy/bar to point at /cc/dd/folder_to_link. I think it's pretty straightforward how this works, but I can clarify if needed.
I'm using Puppet and I need to create symlinks between two folders. I have around 10 folders with the same structure and I always want to execute my link command in xx/yy/zz. Something like this:

ln -s aa/bb/folder_to_link folders_1_to_x/yy/link_name

I tried following these directions but had no success. Is there an easy-to-write command that can accomplish this?
How do I execute a command in each subdirectory? [duplicate]
There are two things to remember:

- ln -s (without -r) stores the target name literally as you pass it to it
- if you pass a relative target, it resolves relative to the link name, not your current working directory

Example: I'm in /home/user/d0 and I want a link to /home/user/file, so I do:

ln -s ../file .

../file is a valid path from d0. Now if there's a subdirectory d1 (/home/user/d0/d1) and I want to place a link to ../file (/home/user/file) there without changing dirs, I need to do:

ln -s ../../file d1/

because the relative path needs to be relative to the link name, not my current working directory. ../../file (probably) resolves to nothing relative to d0 (unless there's a file named /home/file), so I won't get autocompletion for it, which might make the operation more error prone. So I change into d1 first:

cd d1
ln -s ../../file .

and now ../../file makes sense relative to both the current directory and the link name; autocompletion kicks in, and I get my assurance I've got the right name. GNU ln has a --relative|-r flag which makes this easier by saving you from having to compose these relative paths manually. With it, you can use a path relative to the current directory or an absolute path, and it'll relativize it relative to the link name (as it needs to be).
From the manual of coreutils, for ln:

When creating a relative symlink in a different location than the current directory, the resolution of the symlink will be different than the resolution of the same string from the current directory. Therefore, many users prefer to first change directories to the location where the relative symlink will be created, so that tab-completion or other file resolution will find the same target as what will be placed in the symlink.

The string stored in a relative symlink is determined completely by the source pathname and the target pathname, both specified as command line arguments to ln. I don't see how the current directory gets involved. So I don't quite understand the reason why many users prefer to change the current directory to the parent directory of a to-be-created relative symlink before creating it. Could you rephrase or give some examples? Thanks.
Why change the current directory to the parent directory of a relative symlink before creating it
Read your man page: Question 1 = 1st Form. This is because in Linux all items are considered files, even directories. As an example, use your text editor to "open" /etc/, i.e.:

nano -w /etc/

nano will politely tell you /etc/ is a directory, since it's technically legal to create never-ending symlinks. In the old days, before the bounds check was written, I could have an FHS system with 2 files named /etc, one being a file and one being a directory, and the system knew the difference. (See the haha note in the chromiumos developer guide: "There is a file system loop because inside ~/trunk you will find the chroot again. Don't think about this for too long. If you try to use du -s ${HOME}/chromiumos/chroot/home you might get a message about a corrupted file system. This is nothing to worry about, and just means that your computer doesn't understand this loop either. (If you can understand this loop, try something harder.)" I dare you, click on something harder :) In order to prevent the looping, ln requires the full path.

Question 2 can be answered by reading the man page again. Look at the last sentence:

DESCRIPTION
In the 1st form, create a link to TARGET with the name LINK_NAME. In the 2nd form, create a link to TARGET in the current directory. In the 3rd and 4th forms, create links to each TARGET in DIRECTORY. Create hard links by default, symbolic links with --symbolic. By default, each destination (name of new link) should not already exist. When creating hard links, each TARGET must exist. Symbolic links can hold arbitrary text; if later resolved, a relative link is interpreted in relation to its parent directory.

Re: Edit: "However, why can't the shell parse out that path when you're running the command, rather than forcing the user to figure out what the path is & enter it him/herself?" Consider this example: Application A installs Library version 1.0.a. You build applications X, Y, Z that depend on Library A. Application A finds a bug, updates it and saves the library as 1.0.1.2.a. Since applications X, Y, and Z still use library 1.0, if I replace 1.0 directly with 1.0.1.2, I'll get breakage, but if I symbolically link version 1.0.1.2 to version 1.0, nothing breaks:

ln -s /usr/lib64/libfoo-1.0.1.2.a /usr/lib64/libfoo-1.0.a

and applications X, Y, and Z get the new bugfix from the library applied to them too, because the shell follows the link from 1.0 to 1.0.1.2 but calls it 1.0. In cases like this you don't want the shell assuming the path, as you increase the chance of system-wide breakage. BTW, on 64-bit systems /usr/lib is linked to /usr/lib64, to remedy the example I just gave on a large scale, i.e. 32-bit applications expect libraries to be installed at /usr/lib, and on a 64-bit system there are no pure 32-bit libraries, so /usr/lib is linked to /usr/lib64 like so:

ln -s /usr/lib64 /usr/lib
As we all know, the ln command creates a link, with the default being a hard link and the -s option creating a symlink. The general syntax is ln [-s] OLD NEW, where OLD is the file you are linking to and NEW is the new file you are creating. Hard links cannot be created for directories, as a hard link could be created between folders inside each other, and I suppose computers do not yet have the resources to check for this without a SERIOUS slowdown. When creating the link, the paths of both files must be written out, and can be absolute or relative. You can mix relative and absolute filepaths, i.e. have a relative path for the new file/folder and an absolute path for the old one. When creating a hard link with a relative path, the paths of both files are relative to the current folder, while for a symbolic link the path of the linked-to file/folder is relative to its parent folder but the path of the old file/folder is relative to the current folder. Why this is is "relative" to my question. For example, say we are in the HOME folder, /home/user, also known as ~, and create 2 folders, new and new2, with the file file in the folder new. If we try ln -s new/file new2/file, the result is a broken link from ~/new2/file to the currently nonexistent ~/new2/new/file. However, if we instead run ln -s ../new/file new2/file, we get the expected result, which is a link from ~/new2/file to ~/new/file. So, my question: Why is the file path for the OLD file/folder of a symlink relative to its parent, while the other 3 paths (hard link OLD, NEW files, symlink NEW file/folder) are relative to the current folder? All this is on Fedora, but I'm sure it applies to most UNIX-based OSes.

EDIT: E Carter Young seems to hit the nail on the head with regard to my 2nd question (as well as my 1st question, which was wrong anyway). It seems that for a symbolic link, the target doesn't have to exist yet, so the system has to make its path relative to the link rather than the current directory. However, why can't the shell parse out that path when you're running the command, rather than forcing the user to figure out what the path is & enter it him/herself? The shell seems to parse pretty well, so is this a case of legacy issues? Performance issues? What?
Seemingly Inconsistent Behavior for "ln" & "ln -s"
This works because D: is a valid directory name in Linux (and POSIX in general). It has no significance as far as Linux is concerned. (Some programs will treat certain directories named like this in a special way, in the appropriate directory; for example, Wine expects directories like this in the dosdevices directory inside a Wine prefix. But that’s specific to Wine, not something enforced by Linux.)
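A tiny illustration (the /mnt/d target is just an invented example):

ln -s /mnt/d 'D:'   # 'D:' is an ordinary file name; the colon has no special meaning
mkdir 'E:'          # works just as well for plain directories
ls -ld 'D:' 'E:'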
So, as mentioned here, ln -s /directory 'D:' maps the Windows D: location to a Linux-style directory. However, as far as I know, this goes against the Linux naming system. Why does Linux allow the use of Windows-style directory starters in the ln command?
Why Does `ln -s /directory 'D:'` Work Like It Does?
For a short utility script like this, I would suggest not making it interactive. The user likely already knows exactly what directory they want to make a copy of in the manner that you describe, and allowing the user to make use of tab-completion and/or filename globbing on the command line makes it more likely that they get the arguments right compared to having to type them in from memory. Also, I would avoid using ls for anything other than looking at lists of files. Don't use ls in a script for reading filenames. It's better to just use a filename globbing pattern. This is discussed in some detail in this other Q/A: Why *not* parse `ls` (and what to do instead)? To exit a script, either let the execution run to the end, or use exit. Don't use kill. Also, in general, double quote all variable expansions to avoid word-splitting and unwanted filename globbing (see When is double-quoting necessary?). Your specific error is due to trying to use a shell variable in a brace expansion. This is not possible in bash (see e.g. Combine brace and variable expansion in one line). You would have further issues if the user wanted to work with directories with names starting with dashes.

Rather than using ls and combining its output into some sort of list that is used in a brace expansion (which would disqualify filenames that contain commas, newlines and probably a few other characters), I would use rsync with its --link-dest option for this, wrapped into a shell function or shell script for convenience:

#!/bin/sh -

# The two expected arguments are
#   1. an existing source directory, and
#   2. a possibly existing destination directory

if [ ! -d "$1" ]; then
    printf 'Not a directory: %s\n' "$1" >&2
    exit 1
fi

if [ -e "$2" ] && [ ! -d "$2" ]; then
    printf 'Not a directory: %s\n' "$2" >&2
    exit 1
fi

rsync -a --link-dest="$(readlink -f -- "$1")" -- "$1/" "$2"

We start by ensuring that the user has given us valid arguments. For this script, that means that the first argument (the source) must be an existing directory and that the second argument (the destination) must either not exist or be a directory, too. We use readlink -f to ensure that the absolute path to the source directory is used with the --link-dest option of rsync, as it would otherwise interpret the path relative to the destination directory path. The readlink utility is not standard, but is commonly available. Using rsync in this way ensures that the source directory is recursively recreated at the destination directory path and that (if the source and destination are on the same filesystem) the non-directory files are hard-linked. If the source and destination paths are on different filesystems, the source directory path is recursively copied instead, without creating any hard links. Note that we add a slash to the end of the source argument in the call to rsync. We do this to copy the source directory's content, not the directory itself. I wrote the script as a /bin/sh script, not as a bash script, since it does not use any special bash features.
Testing:

$ tree src
src
|-- data.dat
|-- subdir1
|   `-- myfile.txt
`-- subdir2
    `-- myfile.txt

2 directories, 3 files

$ linkcp src dst1

Notice how the link count for each of the files is 2:

$ ls -l dst1
total 8
-rw-r--r--  2 myself  wheel    0 Apr  6 07:47 data.dat
drwxr-xr-x  2 myself  wheel  512 Apr  6 07:47 subdir1
drwxr-xr-x  2 myself  wheel  512 Apr  6 07:47 subdir2
$ ls -l dst1/subdir*
dst1/subdir1:
total 0
-rw-r--r--  2 myself  wheel  0 Apr  6 07:47 myfile.txt

dst1/subdir2:
total 0
-rw-r--r--  2 myself  wheel  0 Apr  6 07:47 myfile.txt

$ linkcp src dst2

The link count now goes up to 3 as we made another copy of the src structure:

$ ls -l dst2
total 8
-rw-r--r--  3 myself  wheel    0 Apr  6 07:47 data.dat
drwxr-xr-x  2 myself  wheel  512 Apr  6 07:47 subdir1
drwxr-xr-x  2 myself  wheel  512 Apr  6 07:47 subdir2
$ ls -l dst2/subdir*
dst2/subdir1:
total 0
-rw-r--r--  3 myself  wheel  0 Apr  6 07:47 myfile.txt

dst2/subdir2:
total 0
-rw-r--r--  3 myself  wheel  0 Apr  6 07:47 myfile.txt
I'm new to bash scripting and wanted to make a script that will get user input and hard link all the files of a directory to another one. So far I have this:

read -p "What directory do you want to link? " dir

items=$(ls -m $dir -Q | tr -d '\n'' ')

read -p "Where do you want to link the files? " outdir

echo "Linking files: $items to: $outdir"

read -p "Continue? (y/N) " confirmation

if [ $confirmation == "y" ] || [ $confirmation == "Y" ]; then
    ln $dir/{$items} $outdir
else
    kill -SIGTERM $$
fi

When I input /home/ethan/test and /home/ethan/test2 I get

ln: failed to access '/home/ethan/test/{"test1","test2"}': No such file or directory

Also, the contents of /home/ethan/test are two files, test1 and test2, and I'm on Arch. Could anyone help me figure out what needs to change to get it working? Thanks.
Need help with a script to link files in a directory
If you don't have write permission in the parent directory, you can't make any changes in the parent directory; this includes deleting the target directory, and creating a symlink. In any case, ln won't overwrite a directory, even with -f.
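A quick demonstration of both points, with throwaway names (the exact error text may vary between ln versions):

mkdir -p parent/dir
ln -sfn /somewhere/else parent/dir   # fails: ln refuses to overwrite a directory
rmdir parent/dir                     # needs write permission on parent/
ln -s /somewhere/else parent/dir     # only now can the symlink be created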
I would like to replace a directory with a symlink. For the directory itself I have full permissions (rwx), but for the parent directory I don't have write permissions (r-x). Is this possible? The man page for ln states that -f removes existing destination files, which sounds like it would first delete the directory, then fail to create the symlink, leaving me with nothing.
Can I replace a directory with a symlink without write permissions in parent?
Assuming you're using GNU tools, you could determine the absolute paths to the link and target and use ln -r, or use realpath's --relative-to option to create relative link targets. Here's a minimal example without sanity checks or cleanup of the link backup:

#!/bin/bash

link=$(realpath -s "$1")   # absolute path to link
target=$(realpath "$1")    # absolute path to target

mv -vf "$link"{,.bak}      # create link backup
mv -vf "$target" "$link"   # move target

# a) use ln -r
ln -vsr "$link" "$target"

# b) or determine the relative path
#target_dir=$(dirname "$target")
#relpath=$(realpath --relative-to="$target_dir" "$link")
#ln -vs "$relpath" "$target"
I'm trying to create a script to swap the location of a symbolic link in the current directory and the target file in another (relative) directory. Unfortunately, I can't figure out how to make the link in the non-local directory, and searches for info on 'ln -s' only retrieve simple case uses. I realize I could 'cd' to the other directory within my script, but I figure there must be a more 'elegant' way. Here's what I have:

#!/bin/bash
#
# Script to help organize repositories.
# Finds local links and swaps them with their target.
#
# Expects 1 argument

if [ "$#" -ne 1 ]; then
    echo "Error in $0: Need 1 argument: <SYMBOLICLINK>";
    exit 1;
fi

LINK="$1";
LINKBASE="$(basename "$LINK")";
LINKDIR="$(dirname "$LINK")";
LINKBASEBKUP="$LINK-bkup";

TARGET="$(readlink "$LINK")";
TARGETBASE="$(basename "$TARGET")";
TARGETDIR="$(dirname "$TARGET")";

echo "Link: $LINK";
echo "Target: $TARGET";

# Test for broken link
# test if symlink is broken (by seeing if it links to an existing file)
if [ -h "$LINK" -a ! -e "$LINK" ] ; then
    echo "Symlink $LINK is broken.";
    echo "Exiting\n";
    exit 1;
fi

mv -f "$TARGET" "/tmp/.";
# printf "$TARGET copied to /tmp/\n" || exit 1;
mv -f "$LINK" "/tmp/$LINKBASEBKUP";
# &&
# printf "$LINKBASE copied to /tmp/$LINKBASEBKUP\n";
# ||
# { printf "Exiting"; exit 1; }
# an alternative way to check errors
# [ $? -neq 0 ] && printf "Copy $LINK to $REFDIRNEW failed\nExiting"

mv "/tmp/$TARGETBASE" "$LINK";   # what was a link is now a file (target)
ln -s "$LINK" "$TARGET";         # link is target and target is link

if [ $? -eq 0 ]; then
    echo "Success!";
    exit 0
else
    echo "Couldn't make link to new target. Restoring link and target and exiting";
    mv "/tmp/$LINKBASEBKUP" "$LINK";
    mv "/tmp/$TARGETBASE" "$TARGET";
    exit 1;
fi

Any help would be appreciated.
Help with script swapping target and symbolic links
Accessing .. doesn't really work as you expect when symlinks are involved... And when you try that in bash, bash tries to be "helpful" and fixes it for you, so the problem does not become apparent. But, in short, when you go to /home/me/project2/src/hdmap/../third_party, the kernel will first resolve the symlink "hdmap", getting to /home/me/repo/robot_dev/cognition/hdmap, then look up .., which means the parent directory of that hdmap directory, so /home/me/repo/robot_dev/cognition, and then it will try to find a third_party in there. Considering /home/me/repo/robot_dev/cognition/third_party does not exist (or, if it does, it's not the same as /home/me/repo/sim/third_party, which is what you want), you get a file not found error. Bash keeps a $PWD variable with the path stored as a string, and so it can help "resolve" the .. references in the shell itself before it passes the kernel a path... That way, it will hide these details from you. You can disable that behavior using set -P (or set -o physical) in bash. (See the bash man page for more details.)

In order to help you with your underlying problem... Perhaps a decent solution is to use cp -rl to copy the trees. The -l option to cp creates hardlinks. That should not be a problem in this case, particularly as I imagine you're not expecting to modify the files. It will still have to traverse the data structure and create each object individually, but it will not have to create any contents... If you're working on a modern filesystem (such as Btrfs), you can also try cp -r --reflink, which creates a CoW (copy-on-write) copy. It's essentially a hardlink, but slightly better, since there's no connection between the two names; they're just sharing blocks on the backend, and touching one of the files will actually fork them into two separate files. (But I imagine hardlinks are probably good enough for you.) Perhaps there are some tricks you could do with git, in order to just expose the directory you need in each step... So you can actually clone the parts you need... But that might be harder to accomplish or to maintain... Hopefully cp -rl will be enough for you!
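A sketch of how to observe the physical resolution from the shell, using the paths from the question:

set -P                            # make bash resolve paths physically, like the kernel does
cd /home/me/project2/src/hdmap/..
pwd                               # prints /home/me/repo/robot_dev/cognition,
                                  # not /home/me/project2/src

And the suggested fix, swapping each cp -r for a hard-linked copy, e.g.:

cp -rl /home/me/repo/sim/third_party .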
I'm working in a team which develops a C++ (ROS) project. For some reason, we don't have good git management. We have several git branches. To compile the project, I have to git clone the code from each branch and rearrange the directory structure. First I mkdir -p /home/me/repo, then I git clone the code from each branch and put all of it into /home/me/repo. Now I need to rearrange the structure; here is what I've done:

#!/bin/sh

mkdir -p /home/me/project/src
cd /home/me/project/src
catkin_init_workspace # command of ROS to initialize a workspace

cp -r /home/me/repo/robot_dev/control .
cp -r /home/me/repo/robot_dev/control_algo .
cp -r /home/me/repo/sim/third_party .
cp -r /home/me/repo/planning .
cp -r /home/me/repo/robot_dev/cognition/hdmap .

cd ..
catkin_make # command of ROS to compile the project

I created this script to compile the project and it worked. As you can see, I simply copied and rearranged some directories and compiled. Now I'm thinking that cp -r is not a good idea because it takes too much time. I want to use ln -s to do the same thing. So I wrote another script as below:

#!/bin/sh

mkdir -p /home/me/project2/src
cd /home/me/project2/src
catkin_init_workspace # command of ROS to initialize a workspace

ln -s /home/me/repo/robot_dev/control control
ln -s /home/me/repo/robot_dev/control_algo control_algo
ln -s /home/me/repo/sim/third_party third_party
ln -s /home/me/repo/planning planning
ln -s /home/me/repo/robot_dev/cognition/hdmap hdmap

cd ..
catkin_make # command of ROS to compile the project

However, I got an error:

CMake Error at /opt/ros/kinetic/share/catkin/cmake/catkin_package.cmake:297 (message):
  catkin_package() absolute include dir
  '/home/me/project2/src/hdmap/../third_party/ubuntu1604/opencv-3.3.1/include'

I've checked cd /home/me/project2/src/hdmap/../third_party/ubuntu1604/opencv-3.3.1/include; it does exist. Is there a reason why cp -r can compile but ln -s can't?
Why can't a project compile with symbolic links
( ! -path "..." -type f -name ... -o -name ...gz ) -exec ...

is parsed as

( ( ! -path "..." -type f -name ... ) -o ( -name ...gz ) ) -exec ...

because the implied and binds more tightly than or. You probably want

! -path "..." -type f ( -name ... -o -name ...gz ) -exec

to have the ! -path (and -type) filters apply to the *.gz files too. The final -exec is also part of the implied and chain, so apart from the pair with -o in between, the actions don't need parentheses around them. With your expression in full:

find "$PWD" ! -path "$PWD/FASTQC" -type f \( -name *.f*q -o -name *.f*q.gz \) \
     -exec ln -sv {} "$PWD/FASTQC" \;
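A quick way to see that precedence at work, with two throwaway files (names invented for the demo):

touch a.txt a.gz
find . -name '*.txt' -o -name '*.gz' -print        # prints only ./a.gz
find . \( -name '*.txt' -o -name '*.gz' \) -print  # prints both

In the first command the implied and attaches -print only to the right-hand side of -o, which is exactly what happened with your -exec.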
I am in the parent directory and there are a number of files with .fastq, .fq, .fastq.gz and .fq.gz extensions in different subdirectories. I created a subdirectory named FASTQC and want to create symlinks to all of them in that subdirectory. When I try: find "$PWD" \( ! -path "$PWD/FASTQC" -type f -name *.f*q -o -name *.f*q.gz \) -exec ln -sv {} "$PWD/FASTQC" \;I get symlinks to all my files in FASTQC, but also the following error messages: ln: failed to create symbolic link '/XXX/YYY/ZZZ/aaa.fastq.gz': File existsWhen I execute the following two commands instead, I get all the symlinks created without any error messages. find "$PWD" \( ! -path "$PWD/FASTQC" -type f -name *.f*q \) -exec ln -sv {} "$PWD/FASTQC" \; find "$PWD" \( ! -path "$PWD/FASTQC" -type f -name *.f*q.gz \) -exec ln -sv {} "$PWD/FASTQC" \;Why do I get error messages with the first command? EDIT: In case someone finds this question later through Google, here is the final working version (thank you, ilkkachu and steeldriver): find "$PWD" ! -path "$PWD/FASTQC" -type f \( -name "*.f*q" -o -name "*.f*q.gz" \) \ -exec ln -sv {} "$PWD/FASTQC" \;
ln -s reports that a procedure failed, but creates symlinks nonetheless
The percent sign is special in crontab and needs to be escaped if you put your date command there (see man 5 crontab). Your symbolic link points to a directory. When you run ln again, it will put the link inside that directory. Example:

$ mkdir real
$ ln -sf real link
$ tree
.
|-- link -> real
`-- real

1 directory, 1 file

$ ln -sf real link
$ tree
.
|-- link -> real
`-- real
    `-- real -> real

1 directory, 2 files

The solution is to use ln with -n (or --no-dereference) on Linux or on any system with GNU coreutils' ln, and with -h on BSD. This causes ln not to descend into the directory that the link points to before creating the new link. A portable solution would be to first explicitly remove the link using rm:

ln -s some_directory link

Later:

rm link && ln -s some_directory link
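Putting both fixes together, the poster's cron job would look something like this (a sketch reusing the paths from the question; note the escaped % signs and the -n):

0 0 1 * * /bin/ln -sfn /NAS-mein/data/$(date "+\%Y\%m") /home/me/PNG_path >> /home/me/.pngln.log 2>&1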
Background The PNG image files I want to use are stored in directories according to date, for example: /NAS-mein/data/201812/, with PNG files stored within it like /NAS-mein/data/201812/foo/bar/20181231_1500.png. So I created a symbolic link PNG_path in my home directory:

ln -s /NAS-mein/data/201812/ PNG_path

and I'm able to update it manually through:

ln -sf /NAS-mein/data/201812/ PNG_path

which works fine and returns `PNG_path' -> `/NAS-mein/data/201812'. I'm in a CentOS 6.7 environment and I don't have superuser privileges. The destination directory is created by others but granted 777 permission, i.e.:

drwxrwxrwx /NAS-mein/
drwxrwxrwx /NAS-mein/data/
drwxrwxrwx /NAS-mein/data/201812/

With Crontab Then I tried to automatically update this symbolic link on the first day of each month, so it will always redirect me to the directory for the current date. I tried starting a job in crontab like:

0 0 1 * * ln -sf /NAS-mein/data/$(date "+%Y%m") /home/me/PNG_path >>/home/me/.pngln.log 2>>&1

but this does not work, without even giving any information to the log. So I tried:

0 0 1 * * cd /home/me/ && ln -sf /NAS-mein/data/$(date "+%Y%m") PNG_path >>.pngln.log 2>>&1

and wrapping it into a Bash script like:

#!/bin/bash
/bin/unlink "/home/me/PNG_path"
/bin/ln -sf /NAS-mein/data/$(date "+%Y%m") PNG_path >>/home/me/.pngln.log 2>>&1

but none of the above seems to work: the symbolic link does not change, and no information is logged (i.e. .pngln.log is never created). I'm not sure where I went wrong, or whether using ln in crontab is just not legitimate. Edit: I notice that I didn't mention the most suspicious part: using the date command inside the ln expression.
How to regularly update symbolic link (ln -sf) via crontab
In the bash shell you need the extglob option for this kind of OR-type pattern:

shopt -s extglob nullglob

(keep globstar enabled too, as in your snippet, so ** still recurses) and then do the globbing as

ln -s /path/**/@(*foo*bar*|*bar*foo*)
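As a quick sanity check of the pattern itself (file names invented for the demo):

$ touch fooXbar barXfoo foobar plain
$ shopt -s extglob nullglob
$ echo @(*foo*bar*|*bar*foo*)
barXfoo foobar fooXbar

All three names containing foo and bar in either order match (the output order can vary with locale); plain is left out.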
I want to target all files called fooxxxbarxxx. The common thing among all those files is that it contains foo and bar. I've tried to use *foo*bar* and *foo**bar* but it doesn't work. Specifically, I'm trying to create soft links to those files, and the rest of the code already works for more straightforward executions (looks into all subfolders of path): shopt -s globstar ln -s /path/**/*foo*bar* . Thanks
How can I stack wildcards to target specific files?
locate(1) has only one big advantage over find(1): speed. find(1), though, has many advantages over locate(1):

find(1) is primordial, going back to the very first version of AT&T Unix. You will even find it in cut-down embedded Linuxes via Busybox. It is all but universal. locate(1) is much younger than find(1). The earliest ancestor of locate(1) didn't appear until 1983, and it wasn't widely available as "locate" until 1994, when it was adopted into GNU findutils and into 4.4BSD.

locate(1) is also nonstandard, thus it is not installed by default everywhere. Some POSIX-type OSes don't even offer it as an option, and where it is available, the implementation may be lacking features you want, because there is no independent standard specifying the minimum feature set that must be available. There is a de facto standard, being BSD locate(1), but that is only because the other two main flavors of locate implement all of its options: -0, -c, -d, -i, -l, -m, -s, and -S. mlocate implements 6 additional options not in BSD locate: -b, -e, -P, -q, --regex and -w. GNU locate implements those six plus another four: -A, -D, -E, and -p. (I'm ignoring aliases and minor differences like -? vs -h vs --help.)

The BSDs and Mac OS X ship BSD locate. Most Linuxes ship GNU locate, but Red Hat Linuxes and Arch ship mlocate instead. Debian doesn't install either in its base install, but offers both versions in its default package repositories; if both are installed at once, "locate" runs mlocate. Oracle has been shipping mlocate in Solaris since 11.2, released in December 2014. Prior to that, locate was not installed by default on Solaris. (Presumably, this was done to reduce Solaris' command incompatibility with Oracle Linux, which is based on Red Hat Enterprise Linux, which also uses mlocate.) IBM AIX still doesn't ship any version of locate, at least as of AIX 7.2, unless you install GNU findutils from the AIX Toolbox for Linux Applications. HP-UX also appears to lack locate in the base system. Older "real" Unixes generally did not include an implementation of locate.

find(1) has a powerful expression syntax, with many functions, Boolean operators, etc. find(1) can select files by more than just name. It can select by:

age
size
owner
file type
timestamp
permissions
depth within the subtree
...

When finding files by name, you can search using file globbing syntax in all versions of find(1), or, in the GNU or BSD versions, using regular expressions. Current versions of locate(1) accept glob patterns as find does, but BSD locate doesn't do regexes at all. If you're like me and have to use a variety of machine types, you find yourself preferring grep filtering to developing a dependence on -r or --regex. locate needs strong filtering more than find does because...

find(1) doesn't necessarily search the entire filesystem. You typically point it at a subdirectory, a parent containing all the files you want it to operate on. The typical behavior for a locate(1) implementation is to spew up all files matching your pattern, leaving it to grep filtering and such to cut its eruption down to size. (Evil tip: locate / will probably get you a list of all files on the system!) There are variants of locate(1) like slocate(1) which restrict output based on user permissions, but this is not the default version of locate in any major operating system.

find(1) can do things to files it finds, in addition to just finding them. The most powerful and widely supported such operator is -exec, but there are others.
In recent GNU and BSD find implementations, for example, you have the -delete and -execdir operators.

find(1) runs in real time, so its output is always up to date. Because locate(1) relies on a database updated hours or days in the past, its output can be outdated. (This is the stale cache problem.) This coin has two sides:

locate can name files that no longer exist. GNU locate and mlocate have the -e flag to make it check for file existence before printing out the name of each file it discovered in the past, but this eats away some of the locate speed advantage, and isn't available in BSD locate besides.

locate will fail to name files that were created since the last database update.

You learn to be somewhat distrustful of locate output, knowing it may be wrong. There are ways to solve this problem, but I am not aware of any implementation in widespread use. For example, there is rlocate, but it appears to not work against any modern Linux kernel.

find(1) never has any more privilege than the user running it. Because locate provides a global service to all users on a system, it wants to have its updatedb process run as root so it can see the entire filesystem. This leads to a choice of security problems:

Run updatedb as root, but make its output file world-readable so locate can run without special privileges. This effectively exposes the names of all files in the system to all users. This may be enough of a security breach to cause a real problem. BSD locate is configured this way on Mac OS X and FreeBSD.

Write the database as readable only by root, and make locate setuid root so it can read the database. This means locate effectively has to reimplement the OS's permission system so it doesn't show you files you can't normally see. It also increases the attack surface of your system, specifically risking a root escalation attack.

Create a special "locate" user or group to own the database file, and mark the locate binary as setuid/setgid for that user/group so it can read the database. This doesn't prevent privilege escalation attacks by itself, but it greatly mitigates the damage one could cause. mlocate is configured this way on Red Hat Enterprise Linux. You still have a problem, though, because if you can use a debugger on locate or cause it to dump core you can get at privileged parts of the database.

I don't see a way to create a truly "secure" locate command, short of running it separately for each user on the system, which negates much of its advantage over find(1).

Bottom line, both are very useful. locate(1) is better when you're just trying to find a particular file by name, which you know exists, but you just don't remember where it is exactly. find(1) is better when you have a focused area to examine, or when you need any of its many advantages.
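As an illustration of that expression power, a single find command can combine several of the tests above with an action (the path and thresholds here are invented):

find ~/src -type f -name '*.log' -size +1M -mtime +30 -exec gzip {} +

That selects by name, size and age in one pass and compresses the matches; locate can only hand you name matches to post-process.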
In Linux and Unix systems there are two common search commands: locate and find. What are the pros and cons of each? When does one have benefits over the other?
locate vs find: usage, pros and cons of each
The command is:

sudo updatedb

See man updatedb for more details.
How can I update the cache / index of locate? I installed new packages and the files are clearly not yet indexed. So which command do I have to run in order to trigger the indexer? I'm currently working on Debian Jessie (testing), with Linux mbpc 3.13-1-amd64 #1 SMP Debian 3.13.7-1 (2014-03-25) x86_64 GNU/Linux
How to update Linux "locate" cache
The locate package is the implementation of locate from GNU findutils. The mlocate package is another implementation of the same concept called mlocate. They implement the same basic functionality: quick lookup of file names based on an index that's (typically) rebuilt every night. They differ in some of their functionality beyond basic usage. In particular, GNU locate builds an index of world-readable files only (unless you run it from your account), whereas mlocate builds an index of all files but only lets the calling user see files that it could access. This makes mlocate more useful in most circumstances, but unusable in some unusual installations where it isn't run by the system administrator (because mlocate has to be setuid root), and a security risk. Under Debian and derivatives, if you install both, locate will run the mlocate implementation, and you need to run locate.findutils to run the GNU implementation. This is managed through alternatives. If you have both installed, they'll both spend time rebuilding their respective index, but other than that they won't conflict with each other.
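On Debian and derivatives you can check which implementation the plain locate name currently resolves to (a sketch; the exact output depends on what is installed):

update-alternatives --display locate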
I wanted to install the command locate, which is available via sudo apt-get install mlocate. However, I first ran sudo apt-get install locate which seems to have installed something else. Typing the command locate <package> however seems to call upon mlocate. What is the package locate, and can (should) it be safely removed?
Difference between locate and mlocate
The cron job is defined in /etc/cron.daily/mlocate. To run it immediately:

sudo updatedb

or better:

sudo ionice -c3 updatedb

This is better because updatedb is then put in the Idle I/O scheduling class, so that it does not disturb other applications (from the I/O point of view). From the ionice man page:

-c class
    The scheduling class. 0 for none, 1 for real time, 2 for best-effort, 3 for idle.
[...]
Idle
    A program running with idle io priority will only get disk time when no other program has asked for disk io for a defined grace period. The impact of idle io processes on normal system activity should be zero. This scheduling class does not take a priority argument. Presently, this scheduling class is permitted for an ordinary user (since kernel 2.6.25).
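If you want to confirm that the job really runs in the idle class, something like this works (a sketch):

sudo ionice -c3 updatedb &
ionice -p "$(pgrep -x updatedb)"   # should report: idle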
On a new Ubuntu 10.4 instance, I tried to use the locate command only to receive the error

locate: can not stat () `/var/lib/mlocate/mlocate.db': No such file or directory

From using this command on other systems I'm guessing that this means the database has not yet been built (it is a fresh install). I believe it is supposed to run daily, but how would I queue it up to run immediately? Also, how is "run daily" determined? If I have a box that I only turn on for an hour at a time, will the database ever be built on its own?
How do I enable locate and queue the database to be built?
Implementations of locate/updatedb typically use specific databases tailored to their requirements, rather than a generic database engine. You'll find those specific databases documented by each implementation; for example:

GNU findutils' is documented in locatedb(5), and is pretty much just a list of files (with a specific compression algorithm);
mlocate's is documented in mlocate.db(5), and can also be considered a list of directories and files (with metadata).
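You rarely need to parse those files yourself; both implementations can summarize their own database with the -S option (a sketch):

locate -S    # prints statistics: which database file, how many directories and file names it holds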
The locate program of findutils scans one or more databases of filenames and displays any matches. This can be used as a very fast find command if the file was present during the last file name database update. There are many kinds of databases nowadays:

relational databases (with a query language, e.g. SQL)
NoSQL databases:
    document-oriented databases (e.g. MongoDB)
    key-value databases (e.g. Redis)
    column-oriented databases (e.g. Cassandra)
    graph databases

So what kind of database does updatedb update and locate use? Thanks.
What kind of database do `updatedb` and `locate` use?
You have to run the updatedb command as the super user. For example, sudo updatedb
I go to use the updatedb command to update the index and I get

updatedb: can not open a temporary file for `/var/lib/mlocate/mlocate.db'

FYI, the locate command is working, e.g.

$ locate Index.xml
/usr/share/mysql/charsets/Index.xml
durrantm.../durrantm$

How can I overcome this issue when trying to run updatedb?
updatedb: can not open a temporary file for `/var/lib/mlocate/mlocate.db'
I recommend locate. sudo apt-get install locate
I often use the locate command on CentOS to find files. What's the alternative for this command on Debian?
Alternative for "locate" on Debian
There is no option to make locate find only a selected type of file (like directories), but you can use the syntax from your question, Dropnot$, to match lines that end with Dropnot. For that you must use the -e option to locate, to turn on POSIX regular expressions. In this case you should use:

locate -e Dropnot$

It is important what version of locate you have. In my system (Gentoo Linux) I have Secure Locate:

$ locate --version
Secure Locate 3.1 - Released March 7, 2006

in which there is no --basename option from uther's answer. That option is provided by GNU locate from the findutils package:

$ ./locate --version
locate (GNU findutils) 4.4.2

If you want to use regexps with GNU locate, you should use the -r switch instead of -e.
This finds a large number of files that are under various subdirectories of "Dropnot":

$ locate Dropnot

Can I find just the directory location with locate? (which directory "Dropnot" is in) So if Dropnot is in /home/me/, that's the only entry that gets returned. If so, what's the simplest / shortest way? Preferably through a flag or symbol rather than piping out and grepping for it, but I'd take anything as an option. Maybe some sort of Dropnot$ for end of line? (but that didn't work).
How can I use locate only for a directory
locate filename
find -name '*filename*'
echo **/*filename*
ls -ld **/*filename*

(Read on for the main terms and conditions. Read the manual for the fine print.)

Listing the contents of a directory is kind of a secondary feature of ls. The main job of ls, the one that takes up most of its complexity, is fine-tuning its display. (Look at the manual and compare the number of options related to choosing what files to display vs. the number of options that control what information to display about each file and how the display is formatted. This is true both of GNU ls, which you'll find on Linux, and of other systems with fewer options, since the early days.)

The default mode of ls is that when you pass it a directory, it lists the files in that directory. If you pass it any other type of file (regular file, symbolic link, etc.), it lists just that file. (This applies to each argument separately.) The option -d tells ls never to descend into a directory. ls does have an option -R that tells it to list directories recursively. But it's of limited applicability, and doesn't allow much filtering on the output.

The very first tool to perform pattern matching is the shell itself. You don't need any other command: just type your wildcards and you're set. This is known as globbing.

echo *filename*

Traditionally, wildcards were limited to the current directory (or the indicated directory: echo /some/where/*filename*). A * matches any file name, or any portion of a file name, but *.txt will not match foo/bar.txt. Modern shells have added the pattern **/ which means "in this directory, or in any directory below it (recursively)". With bash, for historical compatibility reasons, this feature needs to be explicitly enabled with shopt -s globstar (you can put this line in your ~/.bashrc).

echo **/*filename*

The echo command just echoes the list of file names generated by the shell back at you. As an exception, if there is no matching file name at all, the wildcard pattern is left unchanged in bash (unless you set shopt -s nullglob, in which case the pattern expands to an empty list), and zsh signals an error (unless you set setopt nullglob, or setopt no_no_match, which causes the pattern to be left unchanged).

You may still want to use ls for its options. For example, ls can give indications about the nature or permissions of the file (directory, executable, etc.) through colors. You may want to display the file's date, size and ownership with ls -l. See the manual for many more options.

The traditional command to look for a file in a directory tree is find. It comes with many options to control which files to display and what to do with them. For example, to look for files whose name matches the pattern *filename* in the current directory and its subdirectories and print their names:

find /some/dir -name '*filename*' -print

-print is an action (most other actions consist of executing a command on the file); if you don't put an action, -print is implied. Also, if you don't specify any directory to traverse (/some/dir above), the current directory is implied. The condition -name '*filename*' says to list (or act on) only the files whose name matches that pattern; there are many other filters, such as -mtime -1 to match the files modified in the last 24 hours. You can sometimes omit the quotes on -name '*filename*', but only if the wildcard would not match any file in the current directory (see above).
All in all, the short form is:

find -name '*filename*'

Another useful tool when you know (part of) the name of a file is locate. This tool queries a database of file names. On typical systems, it's refreshed every night. The advantage of locate over find / is that it's a lot faster. A downside is that its information may be stale. There are several implementations of locate which differ in their behavior on multi-user systems: the basic locate program indexes only publicly-readable files (you may want to run the companion updatedb to make a second database that indexes all the files in your account); there are other versions (mlocate, slocate) that index all files and have the locate program filter the database to only return the files you can see.

locate filename

Sometimes you think that a file is provided by a package in your distribution, you know (part of) the name of the file but not the name of the package, and you'd like to install the package. Many distributions provide a tool for that. On Ubuntu, it's apt-file search filename. For equivalent commands on other systems, check the Pacman Rosetta.
I'd like to find where a file (with a partially-known filename) is in the file system. I'd like to know how to do this from the command line, rather than using a GUI utility. In Windows, I'd run the following: cd /d C:\ dir *filename* /sWhat's the Linux equivalent?
How to find a file in the filesystem from the command line?
See the man page for updatedb: "If the database already exists, its data is reused to avoid rereading directories that have not changed". The find command, by contrast, traverses all directories, regardless of whether they have changed.
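A rough way to observe the effect (the numbers will vary from system to system):

sudo sh -c 'time updatedb'   # after filesystem changes: rereads the changed directories
sudo sh -c 'time updatedb'   # immediate re-run: mostly mtime checks against the old database, much faster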
How is updatedb so much faster than find? Here's a timed comparison between updatedb and a find command that does a seemingly similar task.

compare.sh

#!/usr/bin/env bash

cmd="sudo updatedb"
echo $cmd
time eval $cmd

cmd="sudo find / \
    -fstype ext4 \
    -not \( \
        -path '/afs/*' -o \
        -path '/net/*' -o \
        -path '/sfs/*' -o \
        -path '/tmp/*' -o \
        -path '/udev/*' -o \
        -path '/var/cache/*' -o \
        -path '/var/lib/pacman/local/*' -o \
        -path '/var/lock/*' -o \
        -path '/var/run/*' -o \
        -path '/var/spool/*' -o \
        -path '/var/tmp/*' -o \
        -path '/proc/*' \
    \) &>/dev/null"

echo $cmd
time eval $cmd

My /etc/updatedb.conf:

PRUNE_BIND_MOUNTS = "yes"
PRUNEFS = "9p afs anon_inodefs auto autofs bdev binfmt_misc cgroup cifs coda configfs cpuset cramfs debugfs devpts devtmpfs ecryptfs exofs ftpfs fuse fuse.encfs fuse.sshfs fusectl gfs gfs2 hugetlbfs inotifyfs iso9660 jffs2 lustre mqueue ncpfs nfs nfs4 nfsd pipefs proc ramfs rootfs rpc_pipefs securityfs selinuxfs sfs shfs smbfs sockfs sshfs sysfs tmpfs ubifs udf usbfs vboxsf"
PRUNENAMES = ".git .hg .svn"
PRUNEPATHS = "/afs /net /sfs /tmp /udev /var/cache /var/lib/pacman/local /var/lock /var/run /var/spool /var/tmp"

For the find command I just specified the ext4 filesystem because that's the only filesystem updatedb should end up looking through. I didn't bother with the file extensions, and I don't know how to exclude a bind mount from find, but I don't have any. I also added an exclusion for /proc, which it seems that updatedb ignores. I should have also ignored /sys. If there were any difference I'd expect the find command to be a little faster, since its rules are a little simpler and it doesn't have to write to disk. Instead updatedb is much faster.

$ ./compare.sh
sudo updatedb

real    0m0.876s
user    0m0.443s
sys     0m0.273s

sudo find / -fstype ext4 -not \( -path '/afs/*' -o -path '/net/*' -o -path '/sfs/*' -o -path '/tmp/*' -o -path '/udev/*' -o -path '/var/cache/*' -o -path '/var/lib/pacman/local/*' -o -path '/var/lock/*' -o -path '/var/run/*' -o -path '/var/spool/*' -o -path '/var/tmp/*' -o -path '/proc/*' \) &>/dev/null

real    6m23.499s
user    0m14.527s
sys     0m10.993s

What are they doing differently?
How is updatedb so much faster than find?
There's no option for that in updatedb.conf. You'll have to arrange to pass options to updatedb manually. With updatedb from GNU findutils, pass --localpaths:

updatedb --localpaths '/ /media/win_c/somewhere/Music /media/win_c/somewhere/Photos'

With updatedb from mlocate, there doesn't appear to be a way to specify multiple roots or exclude a directory from pruning, so I think you're stuck with one database per directory. Set the environment variable LOCATE_PATH to the list of databases:

updatedb --output ~/.media.mlocate.db --database-root /media/win_c/somewhere --prunepaths '/media/win_c/somewhere/Videos'

export LOCATE_PATH="$LOCATE_PATH:$HOME/.media.mlocate.db"
I keep my digital music and digital photos in directories in a Windows partition, mounted at /media/win_c on my dual-boot box. I'd like to include those directoriesβ€”but only those directoriesβ€”in the locate database. However, as far as I can make out, updatedb.conf only offers options to exclude directories, not add them. Of course, I could remove /media from PRUNEPATHS, and then add a whole bunch of subdirectories (/media/win_c/Drivers, /media/win_c/ProgramData...) but this seems a very clunky way of doing itβ€”surely there's a more elegant solution? (I tried just creating soft links to the Windows directories from an indexed linux partition, but that doesn't seem to help.)
How to add specific directories to "updatedb" (locate) search path?
locate -e0 '*/pg_type.h' | xargs -r0 cat

locate pg_type.h would find all the files with pg_type.h in their path (so for instance if there was a rpg_type.horn directory, you'd end up displaying all the files in there). Without -0, the output of locate can't be post-processed, because the files are separated by newline characters while newline is a perfectly valid character in a file name. cat without arguments writes to stdout what it reads from stdin, so locate | cat would be the same as locate; cat would just pass the output of locate along. What you need is to pass the list of files as arguments to cat. That's what xargs is typically for: convert a stream of data into a list of arguments. -r is to not call cat if there's no input. Without -0 (which like -r is not standard but found on many implementations, at least those where xargs is useful to anything), xargs would just look for words in its input to convert into arguments, where words are blank separated and where backslash, single and double quotes can be used to escape those separators, so typically not the format locate uses to display file names. That's why we use the -0 option for both locate and xargs, which uses the NUL character (which is the only character not allowed in a file path) to separate file names. Also note that locate is not a standard command and there exist a great number of different implementations with different versions thereof and different options and behaviours. The code above applies at least to relatively recent versions of the GNU locate and mlocate implementations, which are the most common on Linux-based operating systems at least.
I am trying to perform this:

locate pg_type.h | cat

But this command simply does nothing different from locate pg_type.h. What should I change? I want to perform cat pg_type.h wherever pg_type.h may be.
Why does pipe not work with cat and locate?
You can use xargs:

locate filename123 | xargs vi

By default xargs will execute as few instances of the specified command as possible, passing as many parameters as possible according to the system's ARG_MAX. To limit the number of parameters passed to an instance of vi, use xargs' -n option. To handle file names containing spaces use xargs' -d option:

locate filename123 | xargs -d '\n' vi

To handle file names containing newlines use xargs' -0 option together with locate's -0 option:

locate -0 filename123 | xargs -0 vi

(If -0 is not available on any of them, check for --null too, or another way to specify character \000 as delimiter.)
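One caveat: vi/vim expects its standard input to be the terminal, and in a pipeline it is not (you get "Warning: Input is not from a terminal", as in the question). If your xargs has the -o (--open-tty) option, as BSD xargs and recent GNU findutils do, it reopens stdin on the terminal before running the command (a sketch):

locate -0 filename123 | xargs -0 -o vi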
Suppose that I have a file named filename123.txt and it is the single file that is named so, and I can locate it with the command locate filename123, which returns only this file. Now I want to open it with vi/vim. But I don't want to go to that location and type the vi command followed by the filename. Here I want the result of locate filename123 to be appended to the vi command. How can I do so? I already tried:

locate filename123 | vi

But this does not work, and this error comes up in the terminal:

santosh@santosh:~$ locate filename123 | vi
Vim: Warning: Input is not from a terminal
Vim: Error reading input, exiting...
Vim: Finished.
How to `locate` multiple files and open them in vim?
Not easily. You can use

locate bash | while IFS= read -r line; do [[ -x "$line" ]] && echo "$line"; done

to find all executables where the name contains bash. This is faster than using find across the whole filesystem because only a few files need to be checked.

locate bash does what it always does (lists all matches)
| (pipe) takes the output from the first command (locate) and sends it to the second one (the rest of the line)
the while ...; do ... done loop iterates over every line it receives from the pipe (from locate)
read -r line reads one line of input and stores it in a variable called line (in our case, a path/file name)
[[ -x "$line" ]] tests whether the file in $line is executable
if it is, the && echo "$line" part prints it on your screen
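Note that directories pass the -x test too, so the loop will also print directories whose name contains bash. A variant that skips them, and that is also safe for file names containing newlines if your locate supports -0 (a sketch):

locate -0 bash | while IFS= read -rd '' f; do [[ -f $f && -x $f ]] && printf '%s\n' "$f"; done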
locate gtags would find all the files named gtags. What if I only need executables? Is there any way to do this?
How to find only executable files using 'locate'?
No, there's no such option, and arguably there shouldn't be one. To show progress, the tool would first have to know how many files are present on your system, and that means looping through everything twice, which can be slow. One evident example: if you extract kernel source code with file-roller, it's slower than doing the same thing with tar directly, because file-roller needs to list all the files first (otherwise the progress bar might be displayed incorrectly), so you wait for a while before the extraction process actually begins.
Is it possible to get a reliable progress bar (or just reliable information about how long it will take) when doing updatedb?
Progress bar in updatedb
That depends on the locate you use. There are a couple of implementations, with identical executable names but various package names: locate, slocate, mlocate, rlocate. Usually they all have a -i and/or --ignore-case switch. Consult your locate's man page for the exact syntax. Also, they usually have no configuration file, so if you want case-insensitivity persistently, set an alias in your .bashrc (or similar) file: alias locate='locate -i'.
I wonder how to use the locate command to search for words case-insensitively, such as modifying locate normal so that it returns results containing both "Normal" and "normal".
'locate' for case-insensitive words?
Line 175 in updatedb.sh gives a hint:

PRUNEREGEX=`echo $PRUNEPATHS|sed -e 's,^,\\\(^,' -e 's, ,$\\\)\\\|\\\(^,g' -e 's,$,$\\\),'`

There $PRUNEPATHS is handled as plain text: the space characters are replaced, and no escaping is possible. To ensure the space survives line 175, you must denote it without writing it explicitly. The best way I know is to use \s, which means a whitespace character:

PRUNEPATHS='/path/to/Program\sFiles\s(x86)'

(That will also match tab and newline characters, but in this case that will be fine for you.) Another way is to set $PRUNEREGEX directly, as updatedb would do in line 175:

PRUNEREGEX='\(^/path/to/Program Files (x86)$\)'

There you separate multiple paths with \|, so space is not an issue anymore:

PRUNEREGEX='\(^/path/to/Program Files (x86)$\)\|\(^/foo/bar$\)'
I would like to exclude some Windows folders on an NTFS mount from being indexed by locate. I'm familiar with the PRUNEPATHS syntax in /etc/updatedb.conf. It is a white-space separated list of directory names. My problem is that I want to exclude directories that contain white space themselves (e.g. Program Files (x86)). I tried backslash escapes but that didn't work.
How to exclude directories with blanks via locate's PRUNEPATHS?
The locate database is generally configured to omit files on removable disks, since they can't be assumed to be there later. It can be configured through a file such as /etc/updatedb.conf (the location depends on which of the several locate programs you use and how it is configured by your distribution). For a removable disk, it is probably better to keep the database in a separate file. Run updatedb --localpaths=/media/my_removable_disk --output=/var/cache/locate/my_removable_disk.locatedb to update the database. Add /var/cache/locate/my_removable_disk.locatedb to the environment variable LOCATE_PATH; for reasonably recent versions of GNU locate, an empty path component stands for the default path, so you can use

export LOCATE_PATH=:/var/cache/locate/my_removable_disk.locatedb

If you want to keep the locate database on the removable disk, don't add the path to LOCATE_PATH, because locate stops looking if one of the database files is missing. A wrapper script would be better:

locates () {
  locate "$@"
  for d in /media/*; do
    locate -d "$d/.locatedb" "$@"
  done
}
If I understand correctly, the database locate relies on is just for files on partitions of internal HDDs. I wonder if it is possible to use locate on external HDDs?
Make `locate` able to search files on external HDD
If you are unable to find the file with the commands below, then try updatedb to update the database used by the locate command.

locate -r foot/bar/

or

# locate "/*/bar/avi"
/foot/bar/avi

The find command can also do this:

find / -path '*/foot/bar*'

find / will search the whole system starting from /.
Is it possible to locate a path in the file system like what can be done for file names? For example, I want to find all paths in the system that include 'foo/bar', which may give the following result:

/home/myname/test/foo/bar/hello
/var/www/site/foo/bar
How to locate a path?
On macOS and with the HFS+ file system at least, accented characters are encoded in their decomposed form, so Γ  is encoded as a\u300 (a followed by the U+0300 combining grave accent character) even if you created the file with touch $'\ue0' (the pre-composed form: a stand-alone a with grave accent), causing all sorts of bugs (and the subject of one of Linus Torvalds' famous rants), like its pseudo-case-insensitiveness. You'll notice that if you do:

touch Γ ; echo ?

to list the file names made of one character, it returns nothing, while:

echo ??

or

echo *a*

does return that Γ  (actually aΜ€). And:

$ echo ?? | uconv -x name
\N{LATIN SMALL LETTER A}\N{COMBINING GRAVE ACCENT}\N{<control-000A>}

So you'd need:

rename $'s/a\u300/a/g' ./*

(assuming zsh or a compatible shell). Or, specifying the UTF-8 encoding of that U+0300 character (0xcc 0x80) by hand, for shells that support the ksh93 $'...' quotes but not zsh's $'\u300' (like the ancient version of bash found on macOS):

rename $'s/a\xcc\x80/a/g' ./*

Or let perl interpret those \xcc\x80 sequences directly:

rename 's/a\xcc\x80/a/g' ./*

Or the unicode character:

PERL_UNICODE=AS rename 's/\x{300}//' ./*

Or remove all combining characters with:

PERL_UNICODE=AS rename -n 's/\pM//g' ./*

There, we're telling perl to consider Arguments and Stdio streams as encoded in UTF-8 (see perldoc perlrun for a description of the $PERL_UNICODE env var, equivalent to the -C option) and remove all the characters that have the Mark Unicode property (\pM is short for \p{Mark} or \p{Combining_Mark}; see perldoc perluniprops for details).

Note that you should be able to list that file (in zsh) both with:

ls -d $'a\u300'

and:

ls -d $'\ue0'

(and $'A\u300' and possibly $'\uc0' for Γ€, as it's meant to be case insensitive), but:

ls -d *A*

and, in shells other than zsh:

ls -d *$'\ue0'*
ls -d *$'\xc3\xa0'*

won't match it, because the shell lists the content of the current directory and applies the pattern against each file name, and the file name is encoded as a\u300, which won't match. On zsh, however, and on macOS only, the shell internally converts those letters with combining accents to their precomposed form upon readdir(), as if passing them through iconv -f UTF-8-MAC -t UTF-8. Its own internal zreaddir() wrapper around readdir() returns U+00E0 instead of a + U+0300, which explains why echo *Γ * works there (and not echo *a*) and not elsewhere. The change was introduced in June 2014. See the discussion on the zsh mailing list for more details.

The core of the problem is the discrepancy between the encoding used on user input and the one used to store (and list) file names in the file system. The problem is a lot worse in Korean, where virtually every character has a precomposed and a decomposed form, which explains why the zsh issue was raised by a Korean person initially. So zsh basically fixes Apple's poor choice of decomposed form in the file system so its completion and globs can be used, but unfortunately that only applies to zsh; ls | grep Γ  or find . -name '*Γ *' still won't work.
I am trying to rename files that include the character "Γ ". I do the following : rename -v 's/Γ /a/g' *But it shows all the files as unchanged. Verbose mode shows the same thing. I tried to escape with \ but with no luck. How can I make the regex match this type of character ? EDIT The output of perl -V : Summary of my perl5 (revision 5 version 18 subversion 2) configuration: Platform: osname=darwin, osvers=16.0, archname=darwin-thread-multi-2level uname='darwin osx320.apple.com 16.0 darwin kernel version 15.0.0: wed jun 22 17:57:08 pdt 2016; root:xnu-3247.1.106.2.9~1development_x86_64 x86_64 ' config_args='-ds -e -Dprefix=/usr -Dccflags=-g -pipe -Dldflags= -Dman3ext=3pm -Duseithreads -Duseshrplib -Dinc_version_list=none -Dcc=cc' hint=recommended, useposix=true, d_sigaction=define useithreads=define, usemultiplicity=define useperlio=define, d_sfio=undef, uselargefiles=define, usesocks=undef use64bitint=define, use64bitall=define, uselongdouble=undef usemymalloc=n, bincompat5005=undef Compiler: cc='cc', ccflags ='-arch x86_64 -arch i386 -g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing -fstack-protector', optimize='-Os', cppflags='-g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing -fstack-protector' ccversion='', gccversion='4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)', gccosandvers='' intsize=4, longsize=8, ptrsize=8, doublesize=8, byteorder=12345678 d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16 ivtype='long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8 alignbytes=8, prototype=define Linker and Libraries: ld='cc -mmacosx-version-min=10.12.5', ldflags ='-arch x86_64 -arch i386 -fstack-protector' libpth=/usr/lib /usr/local/lib libs= perllibs= libc=, so=dylib, useshrplib=true, libperl=libperl.dylib gnulibc_version='' Dynamic Linking: dlsrc=dl_dlopen.xs, dlext=bundle, d_dlsymun=undef, ccdlflags=' ' cccdlflags=' ', lddlflags='-arch x86_64 -arch i386 -bundle -undefined dynamic_lookup -fstack-protector'Characteristics of this binary (from libperl): Compile-time options: HAS_TIMES MULTIPLICITY PERLIO_LAYERS PERL_DONT_CREATE_GVSV PERL_HASH_FUNC_ONE_AT_A_TIME_HARD PERL_IMPLICIT_CONTEXT PERL_MALLOC_WRAP PERL_PRESERVE_IVUV PERL_SAWAMPERSAND USE_64_BIT_ALL USE_64_BIT_INT USE_ITHREADS USE_LARGE_FILES USE_LOCALE USE_LOCALE_COLLATE USE_LOCALE_CTYPE USE_LOCALE_NUMERIC USE_PERLIO USE_PERL_ATOF USE_REENTRANT_API Locally applied patches: /Library/Perl/Updates/<version> comes before system perl directories installprivlib and installarchlib points to the Updates directory Built under darwin Compiled at Feb 6 2017 22:16:22 @INC: /Library/Perl/5.18/darwin-thread-multi-2level /Library/Perl/5.18 /Network/Library/Perl/5.18/darwin-thread-multi-2level /Network/Library/Perl/5.18 /Library/Perl/Updates/5.18.2 /System/Library/Perl/5.18/darwin-thread-multi-2level /System/Library/Perl/5.18 /System/Library/Perl/Extras/5.18/darwin-thread-multi-2level /System/Library/Perl/Extras/5.18 .EDIT 2 : Output of locale : LANG= LC_COLLATE="C" LC_CTYPE="UTF-8" LC_MESSAGES="C" LC_MONETARY="C" LC_NUMERIC="C" LC_TIME="C" LC_ALL=SOLUTION Here's in a nutshell what worked. All the 3 solution did the job : rename -nv $'s/a\xcc\x80/a/g' * PERL_UNICODE=AS rename -n 's/\pM//g' ./*. 
(see explanations in the chosen answer) Switching to zsh instead of the default shell of macOS (bash); my original command (without any need to specify combining characters such as a\u300) then worked: rename -v 's/Γ /a/g' *. If you're not satisfied with either of these solutions, please look at the chosen answer for useful tips.
How to rename filenames with accents on macOS?
The problem was the permissions for / (the root directory) and the clue for finding that was this line from your strace output:

access("/", R_OK|X_OK) = -1 EACCES (Permission denied)

You were missing group read permission settings for /. But because you still had x (execute) permission, which allows you to traverse a directory, you could still access all of the files on the filesystem, which is why most everything continued working while those permissions were in effect. The only thing you were not allowed to do is list the contents of /. Most commands don't need to list /, they either use pathnames relative to the current directory or absolute pathnames that access specific well-known directories off the root (like /etc and /var). For security reasons, locate, even though it has access to a complete inventory of filenames generated by a privileged user, insists on reporting only results that the calling user would be able to find by scanning the whole filesystem from the root. Since you couldn't list /, which makes scanning anything straight from the root a non-starter, locate would report nothing at all.
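If you hit the same symptom, the check and the fix look something like this (a sketch; 0755 is the conventional mode for /):

stat -c '%A %U %G' /   # if the group r bit is missing, locate will print nothing
sudo chmod 0755 /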
I usually find the answers to all my Unix related problems already posted as questions and answers. However, this particular issue has had me stumped for the past hour so I thought I'd ask my first question on this site.

Problem

I have a development / staging server running CentOS 5.11. Running locate as a regular user results in no output (not even an error message):

locate readdir

However, running the command as the superuser prints a list of valid results:

$ sudo locate readdir
/home/anthony/repos/php-src/TSRM/readdir.h
/home/anthony/repos/php-src/ext/standard/tests/dir/readdir_basic.phpt
... etc.

strace usually helps me debug any such issues, and running strace locate readdir shows:

stat64("/var/lib/mlocate/mlocate.db", 0xbff65398) = -1 EACCES (Permission denied)
access("/", R_OK|X_OK) = -1 EACCES (Permission denied)
exit_group(1) = ?

Check permissions

I checked the ownership and permissions of the locate binary and its default database. As expected, the command is setgid with slocate as the group owner while the database has the appropriate ownership and permissions.

$ ls -l /usr/bin/locate
-rwx--s--x 1 root slocate 22280 Sep 3 2009 /usr/bin/locate

$ sudo ls -l /var/lib/mlocate/mlocate.db
-rw-r----- 1 root slocate 78395703 May 8 04:02 /var/lib/mlocate/mlocate.db

$ sudo ls -ld /var/lib/mlocate/
drwxr-x--- 2 root slocate 4096 Sep 3 2009 /var/lib/mlocate/

There are also no unusual file attributes:

$ sudo lsattr /usr/bin/locate /var/lib/mlocate/mlocate.db
------------- /usr/bin/locate
------------- /var/lib/mlocate/mlocate.db

Compare with working system

Meanwhile, everything works as expected on the Production server. Running locate readdir as a regular (non-root) user returns a list of results as it should:

$ locate readdir
/usr/include/php/TSRM/readdir.h
/usr/lib/perl5/5.8.8/i386-linux-thread-multi/auto/POSIX/readdir.al
/usr/share/man/man2/readdir.2.gz

For comparison, I also ran this command through strace but I then got the same permission denied error as on the staging server. I was wondering how this could be until I read the manual page for sudo. Listed in the Bugs section:

Programs that use the setuid bit do not have effective user ID privileges while being traced.

So, unfortunately, I can't use strace for debugging. I compared the results of all the above commands between the Staging and Production servers and there's no difference between them. Both systems have the mlocate-0.15-1.el5.2 RPM with no modifications to their files as shown by rpm -V mlocate.

Other considerations

I thought it might be related to the fact that on the problematic staging server, my login is authenticated using Winbind, but I created a regular local user on the same box and I still have the same issue. There's obviously something else that I'm missing but I simply don't know what it is. I suspect it is related to the setgid file permission, maybe PAM or possibly SELinux. I don't know much about either PAM or SELinux: I've only ever looked at PAM when configuring Winbind authentication, while SELinux was installed with the OS but I've never used it. Note: the production server has been subject to far fewer modifications than the development server, which has had some experimentation.
No output from locate command
I've not seen a way to incorporate these results into Nautilus, but there are GUIs for searching mlocate's database. The one that I'm most familiar with is called catfish. It's generally in most of the standard distros' repos. The main website is here, titled: Catfish is a versatile file searching tool. The project's Launchpad site is an additional resource if needed.

Excerpt from the website:

Catfish is a search GUI powered by locate and find behind the scenes, with autocompletion from Zeitgeist and locate. The advanced options allow filtering by date and file type. The interface is intentionally lightweight and simple, using only GTK+.

[Screenshots on the project site show an example search, its results, and the advanced filtering options.]

References

GTK Frontend for locate
When searching for files, I often prefer using locate (because of the speed). However, I end up opening a terminal just for that purpose and then closing it again. Not a big problem for me, but my girlfriend often forgets command names. Is there a way to have Nautilus use mlocate for searching? Ideally, I'd love to have the results displayed separately (because a file in locatedb may no longer exist), but I'm OK if it doesn't. Failing that, is there some GUI to locate?
Is there a way to have Nautilus include mlocate in the results?
Yes, you can use xargs for this. For example, a simple:

$ locate commands.cfg | xargs grep check_dns

(When grep sees multiple files it searches in each one and enables filename printing alongside matches.) Or you can explicitly enable filename printing via:

$ locate commands.cfg | xargs grep -H check_dns

(Just in case grep is called with only 1 argument by xargs.) For programs that only accept one filename argument (unlike grep) you can restrict the number of supplied arguments like this:

$ locate commands.cfg | xargs -n1 grep check_dns

That does not print the names of the files the matched lines are from. The result is equivalent to:

$ locate commands.cfg | xargs grep -h check_dns

With a modern locate/xargs you can also protect against whitespace issues:

$ locate -0 commands.cfg | xargs -0 grep -H check_dns

(By default whitespace separates the input of xargs - which is of course a problem when your filenames contain whitespace ...)
I'm trying to find where check_dns is defined in nagios' commands.cfg file, although there are quite a few files. I know I could run something like find / -name "command.cfg" -exec grep check_dns {} \; to search for matches, but if possible I would like to use locate since it is an indexed copy and much faster. When I run locate commands.cfg I get the following results:

/etc/nagios3/commands.cfg
/etc/nagiosgrapher/nagios3/commands.cfg
/usr/share/doc/nagios3-common/examples/commands.cfg
/usr/share/doc/nagios3-common/examples/template-object/commands.cfg
/usr/share/nagiosgrapher/debian/cfg/nagios3/commands.cfg
/var/lib/ucf/cache/:etc:nagiosgrapher:nagios3:commands.cfg

Is it possible to run locate and pipe it to an inline command like xargs or something so that I can grep each of the results? I realize this can be done with a for loop but I'm looking to pick up some bash-fu / shell-fu here more than how to do it for this specific case.
How can I act on the results of the "locate" command?
You type the command at the shell prompt. The shell processes what you typed, which includes globbing, substituting variables, substituting $() and so on. After processing what you typed, the shell executes the command. Quotes are needed if a string contains characters that are special to the shell, such as spaces or asterisks, but you don't want the shell to process them. You will get away without quoting an asterisk if there are no matching files in the current directory, but it's good practice to quote it anyway. It is important to understand that the --regex option has no effect on the shell's actions. First, the shell processes the command that you type. locate gets the result of that processing. a) If there are files in the current directory that match file*, the shell will replace file* with the list of those files before calling locate. If there is no match, the shell won't touch file*, and locate looks for files that are named file, filee, fileee and so on. In short, the shell attempts globbing, then locate performs a regular expression search if the shell's globbing results in correct syntax. b) The quotes tell the shell to leave the asterisk alone. locate will look for files that start with file. No regex search. c) The shell attempts globbing as in a). If there is no match, locate will look for files that start with file. No regex search. d) The shell leaves the expression alone. locate will perform a regex search and look for files that are named file, filee, fileee and so on.
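You can actually watch the shell do this processing before locate ever runs, e.g. with tracing turned on (a sketch; file1 and file2 are invented):

$ touch file1 file2
$ set -x
$ locate file*
+ locate file1 file2

The trace line shows that locate received the already-expanded names; it never saw the asterisk.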
To use the locate command with a regex, do we need to enclose the pattern in quotes as well as pass the --regex option? If yes, then what do the following mean?

a) locate --regex file* : will a regex search happen here, or shell globbing?
b) locate 'file*' : will locate do a regex search even though we did not pass --regex?
c) locate file* // I understand shell globbing will happen
d) locate --regex 'file*' // I understand a regex search will happen in the database file
Understanding locate command regex
The one that I'm most familiar with is called catfish. It's generally in most of the standard distros' repos. The main website is here, titled: Catfish is a versatile file searching tool. The project's Launchpad site is an additional resource if needed.

Excerpt from the website:

Catfish is a search GUI powered by locate and find behind the scenes, with autocompletion from Zeitgeist and locate. The advanced options allow filtering by date and file type. The interface is intentionally lightweight and simple, using only GTK+.

[Screenshots on the project site show an example search, its results, and the advanced filtering options.]
I am a heavy user of the locate tool, which is part of the findutils package. It's fine to use it on command line, but sometimes I would also like to search for a file (as fast as with locate) within my Xfce 4.10 desktop. Is there any nice GTK frontend (or a panel applet) for the locate command?
GTK Frontend for locate
locate -0 '*.txt' | xargs -r0 stat -c "%n %U" >> result.txt

should do the trick.
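If you would rather have live (non-cached) results, GNU find can print the path and owner directly (a sketch):

find / -name '*.txt' -printf '%p %u\n' >> result.txt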
I have to search for a specific file type on a storage unit and also want to know their owners. With locate '*.txt' >> result.txt I find all files I'm looking for but I'm missing the owner this way. Any suggestions on how I could do it properly?
Locating files and displaying their owner
You could try xmlstarlet to select if the path exists then output the filename: find . -name '*.xml' -exec xmlstarlet sel -t -i '/foo/bar/boom/bang' -f -n {} +
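A quick way to test the template against a file shaped like the one in the question (sample.xml is invented for the demo):

cat > sample.xml <<'EOF'
<foo><bar><boom><bang>Vital information here</bang></boom></bar></foo>
EOF
xmlstarlet sel -t -i '/foo/bar/boom/bang' -f -n sample.xml
# prints: sample.xml   (-i tests the XPath condition; -f prints the matching file's name)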
I'm working with XML files, each of which could be dozens of lines long. There are literally hundreds of these files, all over a directory structure. Yes, it is Magento. I need to find the file that has the <foo><bar><boom><bang> element. A <boom><bang> tag could be defined under other tags, so I need to search for the full path, not just the end tag or tags. There could be dozens of lines between each tag, and other tags between them:

<foo>
    <hello_world>
        ... 50 lines ....
    </hello_world>
    <bar>
        <giraffe>
            ... 50 lines ....
        </giraffe>
        <boom>
            <bang>Vital information here</bang>
        </boom>
    </bar>
</foo>

What is the elegant, *nix way of searching for the file that defines <foo><bar><boom><bang>? I'm currently on an up-to-date Debian-derived distro. This is my current solution, which is far from elegant:

$ grep -rA 100 foo * | grep -A 100 bar | grep -A 100 boom | grep bang | grep -E 'foo|bar|boom|bang'
Find XML file with specific path
locate is an easy way to search for a file quickly since it has its own database. However, I always just use find(1). find runs as the user who invoked it, so it can only find files that user has the appropriate file system permissions to see. find searches recursively, so you can specify / as the search path if you want to search every filesystem.

Finding all files and directories named foo:

find / -name "foo"

Finding only files named foo:

find / -type f -name "foo"

Finding only directories named foo:

find / -type d -name "foo"

There are a lot of useful options. Check out the man page.
In FreeBSD 12, on a freshly-created virtual machine (DigitalOcean), I tried to use the locate command.

$ locate java

I received an error.

locate: database too small: /var/db/locate.database
Run /usr/libexec/locate.updatedb

So I ran locate.updatedb.

$ /usr/libexec/locate.updatedb

Got a message complaining about permissions.

/usr/libexec/locate.updatedb: cannot create /var/db/locate.database: Permission denied

Okay. Run as sudo.

$ sudo /usr/libexec/locate.updatedb

I got a security warning.

WARNING
Executing updatedb as root. This WILL reveal all filenames on your machine to all login users, which is a security risk.

Unix is so much fun.

βž₯ What is the proper secure way to find a file or directory by name on your FreeBSD system?
Safe secure way to locate a file in FreeBSD?
Your glob only matches if the whole path starts with lua: as soon as a pattern contains globbing characters, locate stops wrapping it in implicit * wildcards. Try this glob: locate '*lua*so*'
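To illustrate the rule from the man page (patterns as in the question):

locate lua          # no globbing chars: behaves like locate '*lua*'
locate 'lua*so*'    # globbing chars present: the whole path must start with 'lua'
locate '*lua*so*'   # explicit leading/trailing * restores the intended match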
I'm on Ubuntu 11.04, where I have:

$ locate --version
mlocate 0.23.1
[...]

The man locate page says:

If --regex is not specified, PATTERNs can contain globbing characters. If any PATTERN contains no globbing characters, locate behaves as if the pattern were *PATTERN*.

OK, so to do a little test: first, just searching for 'lua' works - but returns a ton (500+) of results:

$ locate 'lua' | head -5
/boot/grub/hwmatch.lua
/etc/alternatives/lua-compiler
/etc/alternatives/lua-compiler-manual
/etc/alternatives/lua-interpreter
/etc/alternatives/lua-manual

$ locate 'lua' | wc -l
560

I want to search for .so files with lua in the filename, so I try this as an attempt to use a globbing pattern:

$ locate 'lua*so*'

Nothing, 0 results. So I'm trying with a regex:

$ locate --regex 'lua.*so.*' | head -5
/usr/lib/libipelua.so.7.0.10
/usr/lib/liblua5.1.so
/usr/lib/liblua5.1.so.0
/usr/lib/liblua5.1.so.0.0.0
/usr/lib/gtk-2.0/2.10.0/engines/libluaengine.so

Well, this works - so it is good enough. But what puzzles me is this - if the man page says globbing is supported when not using regex, how should I format my glob pattern to have it work?
Getting globbing to work with `locate`?
locate will not work unless you have an up-to-date database. Try find . -type f -name configure instead, or issue an updatedb command first, then do the locate (make sure the current path isn't excluded). But first, you should always check the documentation - maybe the way to compile it does not use the configure mechanism in the first place.
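For instance, a minimal sketch of what to check in the unpacked source tree (assuming an autotools-based project; autoreconf is part of GNU autoconf):

ls configure* Makefile* *.ac *.am 2>/dev/null   # which build files ship with the source?
[ -f configure.ac ] && autoreconf -i            # generate ./configure when only configure.ac is shipped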
I need to compile a package but the ./configure command does not work. I'm getting the following error:

-bash: ./configure: No such file or directory

Where is that script? I used the locate command but it did not return anything.
"./configure" command does not work
Sounds like you want to search for flatpak in the file name only (and not in other path components), so you can use the -b/--basename option: locate -ib flatpak Another approach could be to use the -r/--regex option and write: locate -ir 'flatpak[^/]*$' That is flatpak followed by any number of characters other than / followed by the end of the file path. That might however miss filenames that contain sequences of bytes not forming valid characters (in the current locale) after flatpak.
I want to search for arbitrary file/directory names, but only want to list file paths containing the search string at the same position once. Especially not every file within a directory matching the search string. Here is an example, locate -i flatpak lists: /etc/flatpak /etc/dbus-1/system.d/org.freedesktop.Flatpak.SystemHelper.conf /etc/flatpak/remotes.d /etc/profile.d/flatpak.sh /home/simon/.cache/gnome-software/flatpak/installation-tmp/repo/objects/74 /home/simon/.cache/gnome-software/flatpak/installation-tmp/repo/objects/75 /home/simon/.cache/gnome-software/flatpak/installation-tmp/repo/objects/76 /home/simon/.cache/gnome-software/flatpak/installation-tmp/repo/objects/77 /home/simon/.cache/gnome-software/flatpak/installation-tmp/repo/objects/78 /home/simon/.cache/gnome-software/flatpak/installation-tmp/repo/objects/79 /home/simon/.cache/gnome-software/flatpak/installation-tmp/repo/objects/7a /home/simon/.cache/gnome-software/flatpak/installation-tmp/repo/objects/7b /home/simon/.cache/gnome-software/flatpak/installation-tmp/repo/objects/7c /home/simon/.cache/gnome-software/flatpak/installation-tmp/repo/objects/7d /var/lib/flatpak /var/lib/flatpak/.changed /var/lib/flatpak/.removed /var/lib/flatpak/app /var/lib/flatpak/appstream /var/lib/flatpak/exports /var/lib/flatpak/repo /var/lib/flatpak/runtimeBut I want a search result like this: /etc/flatpak /etc/dbus-1/system.d/org.freedesktop.Flatpak.SystemHelper.conf /etc/profile.d/flatpak.sh /home/simon/.cache/gnome-software/flatpak /var/lib/flatpakAnd which tool is best suited for this? locate, find, fd-find?
Search for arbitrary files but only list matches in results once
You could pipe the output into a grep -v command to exclude ghostscript: locate hosts | grep -v ghostscript
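Alternatively, if your locate supports -r (mlocate does), you can anchor the match so that only entries whose last path component is exactly hosts are returned, no matter what the parent directories are called:

locate -r '/hosts$'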
I am trying to locate all the files named hosts on a linux PC remotely. The problem is there are almost several thousand files with ghostscript as one of the upper directory names or as part of a directory name, so it's returning ALL of those directories. Is there a way to locate hosts, but exclude ghosts?
Locate but exclude names - Linux
Try this: locate -r "wordpress-seo$" Although I should mention that find offers a huge variety of options over locate. You have found locate faster because it just reads from a database /var/lib/mlocate/mlocate.db while find searches through the filesystem every time you give it something to search. locate's database is updated by cron on a daily basis; you can also update the database manually anytime by: sudo updatedb This will make the files created after the daily cron update available in the locate database, so you will find those via locate. Also check the configuration file /etc/updatedb.conf to see which filesystems and paths are being excluded.
I want to locate all folders on my server that end with 'wordpress-seo'. I tried find command but it takes too long. sudo find /home/w/s -type d -name 'wordpress-seo'Now I am trying locate command but it returns all paths that have wordpress-seo. a/wp-content/plugins/wordpress-seo a/wp-content/plugins/wordpress-seo/languages/ ... ...I want to exclude wordpress-seo/* files and folders. I just want folder names. i.e. a/wp-content/plugins/wordpress-seo b/wp-content/plugins/wordpress-seoTried regex without any luck. locate -r '/\w+wordpress\-seo/b' OR locate '/*/wordpress-seo/'Any Help??
Locate specific folders
The configuration is in the file /etc/updatedb.conf. It may look like this: # /etc/updatedb.conf: config file for mlocate# This file sets variables that are used by updatedb. # For more info, see the updatedb.conf(5) manpage.# Filesystems that are pruned from updatedb database PRUNEFS="9p afs anon_inodefs auto autofs bdev binfmt binfmt_misc ceph fuse.ceph cgroup cifs coda configfs cramfs cpuset debugfs devfs devpts devtmps ecryptfs eventpollfs exofs futexfs ftpfs fuse fusectl gfs gfs2 gpfs hostfs hugetlbfs inotifyfs iso9660 jffs2 lustre misc mqueue ncpfs nfs NFS nfs4 nfsd nnpfs ocfs ocfs2 pipefs proc ramfs rpc_pipefs securityfs selinuxfs sfs shfs smbfs sockfs spufs sshfs subfs supermount sysfs tmpfs ubifs udf usbfs vboxsf vperfctrfs"# Paths which are pruned from updatedb database PRUNEPATHS="/tmp /var/tmp /var/cache /var/lock /var/run /var/spool /mnt /cdrom /usr/tmp /proc /media /sys /.snapshots /var/run/media"# Folder names that are pruned from updatedb database PRUNENAMES = ".git .hg .svn .bzr .arch-ids {arch} CVS"# Skip bind mounts. # DISABLED for bnc#994663 and to avoid btrfs subvolume issues PRUNE_BIND_MOUNTS="no"You can exclude file system types, paths/folders and named folders as the documentation says. Please see the man page for details.
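For example, to additionally exclude a path and rebuild the index, a sketch (the /mnt/backup path is hypothetical, and the sed expression assumes the PRUNEPATHS="..." spelling shown above):

sudo sed -i 's|^PRUNEPATHS="|PRUNEPATHS="/mnt/backup |' /etc/updatedb.conf
sudo updatedb   # rebuild the database with the new exclusions in effect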
I had a look at man locate but couldn't find an answer to this. The updatedb command appears to index everything under /, but according to my experiment it didn't index a file at /media/mike/W10 D drive/nonsense_file. Am I to suppose that it excludes mounted media volumes/locations? Is this documented somewhere? Is there some way of choosing to include some of these locations?
Which parts of a Linux system does locate index or not index? [duplicate]
If your locate implementation understands the option -0: locate -0 PATTERN | xargs -0 ls -sdOtherwise: locate PATTERN | xargs -I {} ls -sdOf course you may want to vary the flags passed to ls, e.g. add -h to get β€œhuman-readable” sizes, add --color=auto to have special files in color, etc. If some of the files in the locate database have been removed since the database was generated, ls will print error messages. To hide them, add 2>/dev/null at the end of the command.
Is there any way to display the size of each file next to it after executing the "locate" command?
How to display size of each file next to it after executing the "locate" command?
There are several ways to filter the output from locate.

Method #1 - be explicit. If you know you're looking for a particular version of the library, then just ask locate for it directly.

$ locate libstdc++-3-libc6.2-2-2.10.0.so
/usr/lib/libstdc++-3-libc6.2-2-2.10.0.so

Method #2 - grep. If you're only looking for the .so libraries, then use grep to keep just those results from locate.

$ locate libstdc++ | grep ".so$"
/usr/lib/libstdc++-3-libc6.2-2-2.10.0.so
/usr/lib/gcc/x86_64-redhat-linux/4.4.4/libstdc++.so
/usr/lib/gcc/x86_64-redhat-linux/4.4.4/32/libstdc++.so

Method #3 - only return the first few lines of the result. If you're more interested in finding the first results, then use head to return only the first few. You can direct head to return different numbers of results using the -# switch (shorter but non-standard equivalent of -n #):

Example
$ locate glibc | head -8
/usr/lib64/glib-2.0/include/glibconfig.h
/usr/sbin/glibc_post_upgrade.i686
/usr/sbin/glibc_post_upgrade.x86_64
/usr/share/aclocal/glibc2.m4
/usr/share/aclocal/glibc21.m4
/usr/share/doc/glibc-2.12
/usr/share/doc/glibc-common-2.12
/usr/share/doc/glibc-2.12/BUGS

(note that it returns 8 lines; that's not necessarily the same as 8 file paths, since file paths can be made of several lines: the newline character is as valid as any in a file name).
I need to locate a library on my system and its name is expected to be found a couple of times. I want to keep only the first occurrence. So far I've tried some methods for splitting strings by newlines but none of them seem to work with locate's output. I'm not using find because I don't know where the lib may be in advance. Any other (better) way to handle this is also welcome.
Keep only first match from locate output
After investigation, I discovered the following option: sudo updatedb --debug-pruning The output is lengthy, but ends with the following line: Skipping `/': bind mount In fact, the root file system is a subvolume on a Btrfs partition. Apparently it is a known issue that plocate, as well as mlocate, does not play well with Btrfs subvolumes.
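If you're unsure whether this applies to you, a quick check (findmnt is part of util-linux):

findmnt -no FSTYPE /   # prints e.g. 'btrfs' for the root filesystem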
I run Linux Mint 21. Currently the locate command always returns no results, and updatedb always returns immediately. The database is sized no more than a few kilobytes. I have verified that the package mlocate is installed. The updatedb command is a cascading symbolic link that ultimately resolves to /usr/sbin/updatedb.plocate, a native binary executable. The problem appears on two separate systems having no particular commonality other than both being x86 64-bit machines running the same distribution.
locate and updatedb do nothing in Linux Mint
There is no option for that functionality in the output from man locate on CentOS 6.5, at least. But, you could get pseudo-functionality by changing a search term. For example, locate cron might produce too much output, but locate '/var/log/cron' would limit the results to those items in the locate database that match the search terms. Or, a pipe would work: locate cron | grep '/var/log/' Otherwise, use find: find /path/to/search -name '*cron*' or similar.
How can I locate a file using locate in CentOS under a specific directory from the terminal? locate searches the whole database!
How to locate a file in a directory
There are 3 choices that I'm familiar with: tracker, recoll, and Beagle. This tutorial, titled The best Linux desktop search tools, discusses these and a couple of others.

Tracker

Installation is a snap.

$ apt-get install tracker tracker-utils

After installation it should start indexing your drive automatically. You can peek inside to see what it's up to using tracker-control:

$ tracker-control
Found 288 PIDs…
Found process ID 2611 for 'tracker-store'

Store:
17 Aug 2013, 11:57:51:  βœ“  Store - Idle

Miners:
17 Aug 2013, 11:57:51:  βœ—  Applications - Not running or is a disabled plugin
17 Aug 2013, 11:57:51:  βœ—  File System - Not running or is a disabled plugin

Or you can use tracker-stats:

$ tracker-stats | head -10
Statistics:
  mfo:Action = 1
  mlo:LandmarkCategory = 15
  mto:State = 6
  mto:TransferMethod = 2
  mtp:ScanType = 6
  nao:Tag = 1
  nco:AuthorizationStatus = 3
  nco:Contact = 1
  nco:Gender = 3

You can reconfigure its preferences like so:

$ tracker-preferences

You can manually start up the miners like so:

$ tracker-control -s
Starting miners…
βœ“ Applications
βœ“ File System

And then see what it's up to:

$ tracker-control -F
Store:
17 Aug 2013, 12:13:29:  βœ“  Store - Idle

Miners:
17 Aug 2013, 12:13:29:   0%  Applications - Initializing
17 Aug 2013, 12:13:29:   0%  File System - Initializing
Press Ctrl+C to end follow of Tracker state
17 Aug 2013, 12:13:29:  βœ“  Store - Idle
17 Aug 2013, 12:13:39:   1%  Applications - Crawling recursively directory 'file:///usr/share/applications'
17 Aug 2013, 12:13:39:   1%  Applications - Crawling recursively directory 'file:///usr/share/desktop-directories'
17 Aug 2013, 12:13:39:   1%  Applications - Crawling recursively directory 'file:///home/tammy/.local/share/applications'
17 Aug 2013, 12:13:39:   1%  Applications - Crawling recursively directory 'file:///home/tammy/.local/share/desktop-directories'

After content on the disk has been indexed you can search for it using either the GUI or the search integrated into Nautilus (Ctrl + f). It also provides a command line tool, tracker-search:

$ tracker-search art
Results:
  file:///home/tammy/Documents/ArtEdCurriculumElemFRS.odt

A little more detail:

$ tracker-search -d art
Results: cols:3
  file:///home/tammy/Documents/ArtEdCurriculumElemFRS.odt application/vnd.oasis.opendocument.text http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#PaginatedTextDocument

You can also invoke the GUI search tool, tracker-needle.

Recoll

I don't have an active setup of this one currently, but there are screenshots on the website that show it in action. You can also peruse the online documentation for more information.
Just moved to Ubuntu 12.04 from Windows 7. Under Win 7 I use "Everything" to search files and directories; it can build the index database and update it once any file or directory changes. I'm very used to it so I want to know if there is something similar under Ubuntu 12.04. Now my workaround is updatedb and locate, but I have to run updatedb every time I want to search something. Also, the results are just absolute file paths; what if I want to know the details of the files? (Say, what should I do if I want to sort the results by created_time?) Is there any way that updatedb is automatically executed once I add a file on my disk? If not, are there any tools that can function like "Everything"?
How to monitor directory/file changes to rebuild index?
The only thing that locate.updatedb does is update the locate database. If you don't want to divulge the locations of sensitive files via that database, then you can wait for locate.updatedb to complete, and then run the rm command to remove the database: sudo rm /var/db/locate.database
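If you also want to keep the database from being recreated, note that on FreeBSD it is normally rebuilt by the weekly periodic(8) job; a sketch of disabling that, assuming the stock weekly 310.locate script:

echo 'weekly_locate_enable="NO"' | sudo tee -a /etc/periodic.conf
sudo rm /var/db/locate.database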
After having established the locate command's database indexing file & directory names across my FreeBSD 12 system (as described in this related Question), I now regret doing so because of its emitted security warning:

Executing updatedb as root. This WILL reveal all filenames on your machine to all login users, which is a security risk.

βž₯ Is there a way to undo the effects of running sudo /usr/libexec/locate.updatedb?
Delete the "locate" database in FreeBSD
What is the specific reason you're using locate? This appears to do what you have asked for: find . -type f -name '*doc' -exec du -h "{}" \; That said, if you really do want to use a tool like locate or find and pass its input as parameters to another program, you can avail yourself of the NUL-delimited output and input that some tools provide. locate and find both have an option (locate's -0 and find's -print0) which will allow you to have more programmatically-friendly output, which xargs is designed to read with its -0 argument:

find . -type f -name '*doc' -print0 | xargs -0 du -h
locate -0 '*doc' | xargs -0 du -h
When working with the output of commands such as locate which produce lists of paths in "human readable form" (i.e. without \ in front of spaces), how do you redirect their output to another command? The output of $ locate [something] produces paths with spaces in them, which prevents other programs from using the paths when they contain spaces. For example, if I were to $ du -h `locate *.doc` this will produce an error on all files and directories that contain spaces. (Wrapping the backticks in quotes does not work.)
How to redirect a list of human readable paths to another command?
The easiest way is to write a simple shell script which combines locate and grep: Create a file somewhere in your $PATH (e.g. /usr/local/bin/clocate) with

#!/bin/sh
locate --regex "$1" | grep --color=auto "$1"

then make it executable, e.g. chmod +x /usr/local/bin/clocate, and use it like locate. The script only accepts one pattern. Another way is to use a shell function: if you are using bash as your shell, you can add to your $HOME/.bashrc the following line: clocate() { locate --regex "$1" | grep --color=auto "$1"; } You need to rerun bash or re-source your .bashrc before you can use the new command. Please note the --regex option for locate. You need to write .* instead of * to match any number of characters.
I have used the locate binary many times to search for something on my 1TB HDD. Most of the time I get many results and have to read each line to find what exactly I'm looking for. It would be great if locate could output the matched pattern in color (just like grep --color). Is there any way to do so for the locate command?
locate with color
Different distributions of linux have different tools to install packages, known as package managers - you need to use the right one for your distribution. Yum is the package manager for Red Hat systems. Instead, you need to use apt, Ubuntu's package manager. Try: sudo apt install net-tools locate That pattern should work for most packages on Ubuntu. net-tools is the package which contains ifconfig on Ubuntu. HOWEVER, ifconfig is well out of date and has been for several years. You should use ip, which should be installed already in Ubuntu.
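For reference, the most common ifconfig/route tasks map onto ip like this:

ip addr show    # list interfaces and their addresses (replaces 'ifconfig')
ip link show    # interface state, up/down
ip route show   # routing table (replaces 'route')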
In the Linux-Ubuntu terminal: ifconfig throws bash: ifconfig: command not found, and the locate command does the same. sudo yum install net-tools throws bash: yum: command not found as well, but I might also have made a spelling mistake there when I tested it. What does it mean, do I have to install ifconfig? Or are there alternative commands?
ifconfig and locate command not found, `bash: ifconfig: command not found`
The correct command will be locate '*' or locate "*" or locate \*. The * has to be quoted to avoid having the shell expand it to names present in the current working directory.
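If you just want to know how many entries the database contains rather than listing them all, both mlocate and GNU locate also accept -c/--count:

locate -c '*'   # print the number of matching entries instead of the paths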
When updatedb is run, it generates an mlocate.db file. How do I list all the files in the mlocate.db file? locate * - is this correct?
How to list all files in mlocate database file?
You should probably know that mlocate only does queries on the databases created by updatedb. If you want to change the default location of the databases created by updatedb you should pass the --output FILE option to updatedb and then do the query with locate --database FILE afterwards. You could do: $ sudo updatedb -o /var/db/foo.db $ locate -d /var/db/foo.db something
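A related trick with mlocate's updatedb: you can build a private database for just a subtree and query it without root. A sketch (the db path is hypothetical; -l 0 turns off the visibility checking that would otherwise require special database ownership):

updatedb -l 0 -U "$HOME" -o "$HOME/.cache/home.db"   # index only $HOME
locate -d "$HOME/.cache/home.db" something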
Looks like it's not configurable through /etc/updatedb.conf, and it's not mentioned in its manual either. So can I change that?
Change where mlocate stores the database?
1. Make a backup copy of /var/lib/mlocate/mlocate.db now, before the mlocate updatedb cron job runs again.
2. Dump mlocate.db to a text file: mlocate / | sort > /var/lib/mlocate/mlocate-old.txt
3. Update your mlocate.db. How to do this varies slightly according to what kind of unix clone or linux distribution you're using; e.g. on a Debian box, run /etc/cron.daily/mlocate, or just updatedb.mlocate.
4. Dump the new mlocate.db to a file: mlocate / | sort > /var/lib/mlocate/mlocate-new.txt
5. See the changes with, e.g., diff -u /var/lib/mlocate/mlocate-{old,new}.txt. The output is likely to be huge, so redirect to a file or pipe into less.
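Since both dumps are sorted, comm can also list exactly the names that have disappeared (lines present only in the old dump), which is closer to "what is missing" than a full diff:

comm -23 /var/lib/mlocate/mlocate-old.txt /var/lib/mlocate/mlocate-new.txt > missing.txt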
I ran rm -rf on /var/cache/lxc, not realizing it was full of symlinks. I've lost a bunch of files, including most of /dev. I have a mlocate.db from 16 hours ago. How do I compare the list of files from mlocate.db to what still exists to get a complete list of what is missing? locate -e says it will give me files that still exist, I basically need the opposite. edit: Thank you cas. Took me a while, but I finally found the problem: #mount | grep /var/cache/lxc devtmpfs on /var/cache/lxc/fedora/x86_64/bootstrap/dev type devtmpfs (rw,nosuid,seclabel,size=74173740k,nr_inodes=18543435,mode=755) proc on /var/cache/lxc/fedora/x86_64/bootstrap/proc type proc (rw,relatime) proc on /var/cache/lxc/yakkety/rootfs-amd64/proc type proc (rw,relatime)
How do I compare mlocate.db to what exists now?
Your command also outputs files the current user doesn't have the permission to access. A slightly shorter solution would be locate -0b '\python' | perl -0nE 'say if -f' but it doesn't print the non-accessible files. You can use bash to loop over the files, too, but it's a bit more verbose:

locate -0b '\python' | while IFS= read -d '' -r f ; do
    [[ -f $f ]] && printf '%s\n' "$f"
done
I have the following version of locate:

$ locate --version
mlocate 0.26
Copyright (C) 2007 Red Hat, Inc. All rights reserved.
This software is distributed under the GPL v.2.
This program is provided with NO WARRANTY, to the extent permitted by law.

I am trying to find all files (not directories) that have some specific basename, e.g. python, so I've tried the following:

$ xargs -a <(locate -b '\python') -I{} file {} | sed -E '/directory|symbolic/d;s/:.*$//g'

This prints exactly the expected output. However, I wonder if there is an efficient way to achieve that instead?
mlocate: how to print files only [duplicate]
The second ~ isn’t being expanded; try locate -d "${HOME}/.a_locate.db:${HOME}/.b_locate.db:" -Ai file_to_find instead, or, since this is zsh, just locate -d $HOME/.a_locate.db:$HOME/.b_locate.db: -Ai file_to_find The reason is that ~/ is only expanded at the beginning of a shell word. A shell word only ends at whitespace (as far as it matters here; the actual rules are much more complicated). ~/foo:~/bar is a single word, which begins with ~/ so the leading ~ is expanded to your home directory, but the middle ~ is nothing special so it stays a tilde. There's an exception on the right-hand side of an assignment: in PATH=~/foo:~/bar, ~/ is expanded after the = assignment sign and after a : on the right-hand side.
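You can see the rule in action (the home directory shown is hypothetical; a scratch variable stands in for PATH):

$ echo ~/foo:~/bar
/home/user/foo:~/bar            # argument: only the leading ~ is expanded
$ p=~/foo:~/bar; echo $p
/home/user/foo:/home/user/bar   # assignment: ~ expands after = and after :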
I want to pass multiple DB files to the locate command, like this: locate -d ~/.a_locate.db:~/.b_locate.db: -Ai file_to_find But this gives me this error:

locate: can not stat () `~/.b_locate.db': No such file or directory

The man page for locate says:

-d, --database DBPATH
Replace the default database with DBPATH. DBPATH is a :-separated list of database file names. If more than one --database option is specified, the resulting path is a concatenation of the separate paths.

I don't clearly understand what is meant by 'concatenation of the separate paths'. What am I doing wrong? I tried giving the full path (/home/user/.b_locate.db) and it worked. Can someone explain this behaviour? (I'm using the mlocate package in Arch Linux.)
How to pass multiple DB files to locate?
These flags don’t change the configuration file; they affect the invocation of updatedb they are attached to, ignoring the configuration file. Thus sudo updatedb --prune-bind-mounts noruns updatedb with PRUNE_BIND_MOUNTS set to no, regardless of the value set in the configuration file. If you want to change /etc/updatedb.conf, edit it.
Based on the man page of the command "updatedb", we can change and override the configuration of /etc/updatedb.conf using the below commands: --prune-bind-mounts FLAG Set PRUNE_BIND_MOUNTS to FLAG, overriding the configuration file. --prunefs FS Set PRUNEFS to FS, overriding the configuration file. --prunenames NAMES Set PRUNENAMES to NAMES, overriding the configuration file. --prunepaths PATHS Set PRUNEPATHS to PATHS, overriding the configuration file.But when I try to use them, there is no change. For example I expect the command below to change the flag to "no", but nothing happens: sudo updatedb --prune-bind-mounts noJust it takes a while to execute and exits without any warnings or errors, when I check the /etc/updatedb.conf content, it's the same as before.
How can I change the configuration of /etc/updatedb.conf file?
There are many problems with your commands. First, locate *.c only looks for files matching *.c if you run it in a directory that doesn't contain any file whose name matches *.c. Otherwise the shell expands *.c to the list of matching files. That's probably not happening, otherwise you'd get a lot fewer matches, but leaving unquoted globs like this is a bad habit because it will bite you one day. (It's a frequent topic on this site.) The same goes for find -name *.c. Instead, write locate '*.c' … find / -name '*.c' … or something similar.

There are some common reasons why locate and find might give different results. They don't seem to apply in your case since you're getting the same number of hits, but once again this is something you need to be aware of.

- locate results are cached from the last run of updatedb. This usually runs once at night. find results are calculated each time you run the command.
- Depending on the system, on which locate implementation you have and on how it's configured, it may let you see only publicly accessible files (e.g. GNU findutils, rather than mlocate or slocate), or it may make an approximation of the files that you're allowed to access (e.g. because there's a complex setup involving Linux security modules that distinguish between applications trying to access the file).
- The pattern *SUFFIX means the same thing to locate and to find -name (assuming that SUFFIX doesn't contain slashes or wildcards), but other patterns don't. For example locate foo is equivalent to find / -name '*foo*', not to find / -name 'foo'.

Another thing that might, but probably doesn't, cause problems is that you've piped error messages from find into the data processing part of your command. You strip out lines containing Permission denied, which causes you to miss files containing this as part of their name (ok, you probably don't have any), and causes any error message that doesn't contain Permission denied to be interpreted as an input line. It is rarely a good idea to mix data output with error output, and it's absurd here. If you want to ignore errors, redirect them to /dev/null: find … 2>/dev/null | …

What is definitely biting you is that xargs expects an input syntax that's different from what find produces. In the input of xargs, any whitespace separates items, not just line breaks. The three characters \'" are also parsed specially. Spaces are common in file names and all other characters are permitted apart from / and from null bytes. One of the lines that xargs receives as input is /usr/lib/python2.7/site-packages/setuptools/script template (dev).py For xargs, that's three items: /usr/lib/python2.7/site-packages/setuptools/script, template and (dev).py. The reason for the error messages from wc should now be clear.

There are several solutions for this. One is to use the null-delimited format for find and xargs. This works with any file name, even file names containing newlines (which are permitted, but uncommon). find / -name '*.py' -print0 | xargs -0 wc -l | tail -3 Another is to forget about the problematic xargs and make find invoke the command directly. find / -name '*.py' -exec wc -l {} + | tail -3 The first solution may be applicable to your locate implementation; check if it has a -0 option. The second solution is specific to find. If you're stuck with newline-delimited output from locate, and you have the GNU version of xargs, then you can use -d '\n' to make it parse the input as newline-delimited without any form of quoting.
locate '*.py' | xargs -d '\n' wc -l | tail -3This was your main problem. An additional problem is that there's a maximum length to the command line. The xargs command (or the -exec … {} + action of find) puts as many file names as it can on a command line, and if they don't all fit, then the command (here, wc -l) is executed multiple times, once for each batch of files. With tail -3, you're only seeing the last two files and the total for the last batch (assuming that there are at least two files in the last batch). The files in the previous batches are not reflected in this output. Since find and locate may not report files in the same order, you may see different results. How to solve the maximum length problem depends on what you want to do with the data. If all you want is grand totals, then one way (assuming no newlines in file names) is to count all total lines. … | xargs -d '\n' wc -l | awk '/^[0-9]+\ttotal$/ {total += $1} END {print total}'
I am having trouble understanding why find and locate would work differently for C and Python source files. My goal is to count the number of source files and the sum of their source code lines for a given language. I used both find and locate to compare outputs (updatedb was just run prior to this with sudo to make sure locate reports current results). For C files this works as expected; the number of source files is the same:

$ find / -name *.c |& grep -v "Permission denied" | wc -l
1056
$ locate *.c | wc -l
1056

Using xargs, the sums of source code lines also come up the same.

$ locate *.c | xargs wc -l | tail -3
    138 /usr/src/kernels/3.10.0-693.el7.ppc64/scripts/selinux/genheaders/genheaders.c
    147 /usr/src/kernels/3.10.0-693.el7.ppc64/scripts/selinux/mdp/mdp.c
 705376 total

$ find / -name *.c |& grep -v "Permission denied" | xargs wc -l | tail -3
   2994 /opt/Python-3.6.2/Objects/listobject.c
    821 /opt/Python-3.6.2/Objects/bytes_methods.c
 705376 total

Just to test, this also works for files with a .java extension - I get the same consistent results. However, when I repeat the same for Python files (i.e. .py extension), the source file counts match:

$ find / -name *.py |& grep -v "Permission denied" | wc -l
9249
$ locate *.py | wc -l
9249

But the sum of lines of code for Python files gives very different results:

$ locate *.py | xargs wc -l | tail -3
wc: /usr/lib/python2.7/site-packages/setuptools/script: No such file or directory
wc: template: No such file or directory
wc: (dev).py: No such file or directory
wc: /usr/lib/python2.7/site-packages/setuptools/script: No such file or directory
wc: template.py: No such file or directory
    220 /usr/src/kernels/3.10.0-693.el7.ppc64/scripts/rt-tester/rt-tester.py
    129 /usr/src/kernels/3.10.0-693.el7.ppc64/scripts/tracing/draw_functrace.py
 753350 total

$ find / -name *.py |& grep -v "Permission denied" | xargs wc -l | tail -3
wc: /usr/lib/python2.7/site-packages/setuptools/script: No such file or directory
wc: template: No such file or directory
wc: (dev).py: No such file or directory
wc: /usr/lib/python2.7/site-packages/setuptools/script: No such file or directory
wc: template.py: No such file or directory
   1919 /opt/Python-3.6.2/python-gdb.py
     69 /opt/Python-3.6.2/python-config.py
1034101 total

Can someone explain why this is the case? What's so different about Python files? (I can't really believe it has to do with the file type, but I'm stumped.) What am I missing here? Same odd results under Ubuntu and RH. I run updatedb with sudo, but I'm running all of these commands as a regular user.
When counting source files and LOC with locate and find - why do Python files come up different?
I looked and did not find any offering that provided just a web app interface to an existing slocate database file. So you have the following options:

1. Roll your own. Shouldn't be too difficult: use a CGI-based approach which would allow users to search for entries in your pre-built slocate database file.
2. Skip using the slocate database file and use a dedicated search engine that includes both a crawler and a web frontend, such as one of the following:
   - OpenSearchServer
   - Hyper Estraier
   - Recoll + Recoll-WebUI
   - Wumpus Search Engine
I would very much like to allow users in a small office environment harness the power of slocate indexed database on the file server. Currently when users are looking for a file in our fileserver, they need to run find from their Windows workstations on the network shares that are available from the server. This loads up the server while other are working. Alternatively, I could set the indexers in every workstation to index the server locations. This is not ideal either, as the server would again be loaded a task that must be run multiple times a day on the same set of data! Ideally, the file server will carry out its own indexing and my users (who are oblivious to Linux and its command-line) will be able to log on to a simple website on the file server and run a search in much the same way I run locate commands in the command line. Is there something available?
is there a web app for returning results to a search on an indexed database?
I don't know Linux on Windows, nor the GNU version of locate, but you should be able to do what you want. There is a longer version of the manual here. Replace your single updatedb --localpaths="/a /b" by 2 commands:

updatedb --localpaths="/b"
if [ -d "/a" ]; then updatedb --localpaths="/a" --output=/dir/mydb; fi

where /dir/mydb is the full pathname of the file you want to hold the database in. When you do a locate, set the environment variable LOCATE_PATH to /dir/mydb::. In principle :: should mean use the standard db. If :: doesn't work, you may be able to get the filename of the standard db by running updatedb --help. It might say, for example, the default is /usr/local/var/locatedb. You can then set LOCATE_PATH=/dir/mydb:/usr/local/var/locatedb. You can also use the -d option to locate to provide this list of dbs.
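Putting it together, a sketch of a wrapper you might run instead of plain updatedb (paths as above):

#!/bin/sh
# always reindex the fixed drive
updatedb --localpaths="/b"
# only reindex the removable drive when it is mounted
[ -d /a ] && updatedb --localpaths="/a" --output=/dir/mydb

and then query both databases with:

LOCATE_PATH=/dir/mydb:: locate somefile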
I run updatedb like this: updatedb --localpaths="/a /b" /a is a removable drive. /b is the local hard drive. Although /a's not always accessible to me, I frequently want to run locate to find if I have a certain file on it (based on the last time I ran updatedb). The problem is, if I run updatedb when it's not plugged in, I get an error: /usr/bin/find: '/a': No such file or directoryThe database gets the latest information about /b, but it removes /a's existing data. Is there a way to keep /a's data when /a isn't plugged in during updatedb? I think this might be possible with multiple databases, one for /a and another for /b. Then a script can check whether or not /a is plugged in when it decides whether or not to updatedb. But the man page for both commands kind of assumes I know a lot more than I do (e.g., what FINDOPTIONS does), so I'm hoping there's an easier solution to this problem.
Can updatedb keep localpaths for removable drives that aren't plugged in?
locate is very versatile and can take -r and a regexp pattern, so you can do lots of sophisticated matching. For example, to match directories a a0 a1 and so on use '/a[0-9]*/'. This will only show directories with files in them since you need the second / in the path. To match the directory alone use $ to anchor the pattern to the end of the path, '/a[0-9]*$'. Note, there are at least 2 versions of the locate command, one from GNU, and one from Redhat (known as mlocate). Use --version to find which you have. They differ slightly in the regex style. For example, if we change the above pattern '/a[0-9]*$' to use + instead of * to avoid matching a on its own, then mlocate needs \+ and gnu just +. For example, to match a directory a and all underneath it you might use for both versions locate -r '/a\(/\|$\)' For mlocate you might prefer --regex, which uses extended syntax: locate --regex '/a(/|$)' To do the same for gnu locate you would need to add the option --regextype egrep, for example.
Is there a CLI tool similar to gnome-search-tool? I'm using locate, but I'd prefer that it grouped results where directory name is matched. I get a lot of results where the path is matched which is not what I want: /tmp/dir_match/xyz /tmp/dir_match/xyz2/xyz3It needs to be fast and thus use a search index.
Simple CLI tool for searching
locate is not dependable for live, current information about what files are present on your system. Information is cached in a database. Also consider the famous line, with link:It's not working! Should I blame caching?For actual current information on what files/directories exist on your box right now, use ls or find or stat or test -e filename && echo it is there or even printf %s\\n *. Pretty much anything except locate will give you up-to-date information about your filesystem. See also LESS=+/BUGS man locate which (on my system) reads in part:BUGS The locate program may fail to list some files that are present, or may list files that have been removed from the system. This is because locate only reports files that are present in the database...You can run updatedb, but honestly if you know exactly where the files are and you are using locate to find them...you are simply doing it wrong. locate tells you a path. It tells you nothing about the existence or nonexistence of files at that path. If you already know the path to the file, you don't need locate, do you? The purpose of locate is to "find filenames quickly", not necessarily accurately or dependably.Note: I'm not saying "don't use locate." It does have a purpose, when you have no idea where on your system a certain file might be. But once you get the pathname from locate, it has served its purpose and you now need to use other tools to examine/verify/etc. the file you've found.
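As a stop-gap for stale entries in situations like this, mlocate and GNU locate both offer -e/--existing, which re-checks each database entry against the filesystem at query time and prints only entries that still exist:

locate -e nomNomina2   # stale entries are filtered out (the database itself stays stale)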
So basically, I'm trying to delete the files /var/lib/mysql/db/nomNomina2.*, and when I look for them with locate, I get the following output:

/var/lib/mysql/db/nomNomina2.MYD
/var/lib/mysql/db/nomNomina2.MYI
/var/lib/mysql/db/nomNomina2.frm

but then when I try

$ rm -fv /var/lib/mysql/db/nomNomina2.frm

I get no output, but the files still show when using locate. Notice that I can create and delete a file with the same filename in the same location, but it will still show when using locate, and I won't be able to create another table with the same name. Any ideas what could be causing this? A filesystem mess? How to correct it?
Cannot delete file, but can delete parent directory
Gah. [jake@jace]/bin% ls -lhd /bin lrwxrwxrwx. 1 root root 7 May 22 2012 /bin -> usr/bin/I'm running Fedora 17. Apparently /bin is symlinked to /usr/bin. And of course (and quite rightly) find and locate ignore symlinked directories to avoid result pollution.
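If you do want find to descend through the symlink, either resolve it with a trailing slash or tell find to follow symlinks with -L:

find /bin/ -iname verify    # trailing slash: the symlink's target directory is searched
find -L /bin -iname verify  # -L: follow symbolic links while descending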
What gives? Normal find and locate commands don't turn up the verify program that lives at /bin/verify. In fact, it seems they don't turn up anything that lives in /bin [jake@jace]/bin% "find" /bin/ -iname "verify" 2>/dev/null /bin/verify [jake@jace]/bin% "find" /bin -iname "verify" 2>/dev/null [jake@jace]/bin% "find" / -iname "verify" 2>/dev/null /home/jake/android/cts/tools/vm-tests-tf/src/dot/junit/verify /usr/share/cmake/Modules/FortranCInterface/Verify /usr/bin/verify. [jake@jace]/bin% locate "verify" | grep "bin" /usr/bin/db_log_verify /usr/bin/db_verify /usr/bin/fprintd-verify /usr/bin/json_verify /usr/bin/ldns-verify-zone /usr/bin/rpmverify /usr/bin/verify /usr/bin/verifytree. [jake@jace]/bin% "ls" -lh /bin/verify -rwxr-xr-x. 1 root root 32K May 22 2012 /bin/verify
Why don't find and locate search /bin?
TL;DR: if you're using btrfs, do the following:

1. edit /etc/updatedb.conf
2. replace PRUNE_BIND_MOUNTS = "yes" with PRUNE_BIND_MOUNTS = "no"
3. save the file
4. update the db with sudo updatedb
5. test again with locate home to see output from the home directory.

Explanation

I contacted the plocate author and he kindly pointed me in the right direction with the following reply: 90% of these issues seem to be btrfs misconfigurations. You don't say which filesystem you're running, but if so, see the updatedb.conf man page under PRUNE_BIND_MOUNTS and check your fstab. I'm indeed using btrfs (to check: lsblk -f). Consulting man updatedb.conf gives the following: PRUNE_BIND_MOUNTS One of the strings 0, no, 1 or yes. If PRUNE_BIND_MOUNTS is 1 or yes, bind mounts are not scanned by updatedb(8). All file systems mounted in the subtree of a bind mount are skipped as well, even if they are not bind mounts. As an exception, bind mounts of a directory on itself are not skipped. Note that Btrfs subvolume mounts are handled internally in the kernel as bind mounts (see btrfs-subvolume(8)), and thus, may get skipped if you have also mounted the filesystem root itself. To counteract this, make your root directory a Btrfs subvolume, too. By default, bind mounts are not skipped. For the solution, see the TL;DR above.
Currently I'm trying to migrate from mlocate to plocate. I'm using mlocate in my script (note: locate is aliased to either mlocate or plocate depending which one I've installed).

Successfully searched home directory with mlocate

Below are the first 10 outputs after running locate home (locate is aliased to mlocate):

/home
/etc/apparmor.d/tunables/home
/etc/apparmor.d/tunables/home.d
/etc/apparmor.d/tunables/home.d/site.local
/etc/systemd/homed.conf
/home/username
/home/username/.bash_history
/home/username/.bash_profile
/home/username/.bashrc
/home/username/.cache
/home/username/.cargo

As you can see, I could successfully find files in my home directories with mlocate.

Unsuccessfully searched home directory with plocate

However, after installing plocate, I get results from /etc/, /usr/ etc., and all I get is one /home. After running locate home (locate is aliased to plocate):

/home
/etc/apparmor.d/tunables/home
/etc/apparmor.d/tunables/home.d
/etc/apparmor.d/tunables/home.d/site.local
/etc/systemd/homed.conf
/usr/bin/addgnupghome
/usr/bin/homectl

As you can see, plocate couldn't find files and directories in my home directory.

What I've tried

1. Comment from author of plocate

This manjaro thread How to use plocate? has the author of plocate commenting as below: First, check that the database has been updated recently. Most users will want to use plocate’s updatedb; plocate-build (which converts from mlocate’s database) is generally not what you want since plocate 1.1.0. [...] The other reason why a file isn’t shown, is typically permissions. Check if you can find the files as root (sudo plocate test); if you can, the problem is most likely that you don’t have access rights to the directory all the way down from the root. plocate should find anything that find / -name test does, but no more.

I've run sudo updatedb. My home dir access rights:

/ ➜ ll
drwxr-xr-x - root 17 Jul 2022 home/
drwxr-xr-x - root 16 Apr 21:44 usr/

/home ➜ ll
drwxr-xr-x - username 16 Apr 21:51 username/

It does seem that plocate couldn't access files and directories under my home dir, but it has the same access rights as usr. I don't understand why plocate can see usr but not the files and dirs under home.

2. Results are different when running as sudo

The results are different when I run plocate as sudo. Below is regular locate:

➜ locate ranger
/usr/bin/ranger
/usr/lib/python3.10/site-packages/ranger
/usr/lib/python3.10/site-packages/ranger_fm-1.9.3-py3.10.egg-info
/usr/lib/python3.10/site-packages/ranger/__init__.py
/usr/lib/python3.10/site-packages/ranger/__pycache__

Below run as sudo:

➜ sudo locate ranger
/root/.config/ranger
/root/.local/share/ranger
/root/.local/share/ranger/bookmarks
/root/.local/share/ranger/history
/root/.local/share/ranger/tagged
/usr/bin/ranger
/usr/lib/python3.10/site-packages/ranger
/usr/lib/python3.10/site-packages/ranger_fm-1.9.3-py3.10.egg-info

I can see results in my home directories (i.e., first 5 results).

What I want

I'd like to be able to search my home directory with plocate the way I can do so with mlocate. In other words, I expect results from the home directory when using plocate:

$ locate home
/home
/etc/apparmor.d/tunables/home
/etc/apparmor.d/tunables/home.d
/etc/apparmor.d/tunables/home.d/site.local
/etc/systemd/homed.conf
/home/username
/home/username/.bash_history
/home/username/.bash_profile
/home/username/.bashrc
/home/username/.cache
/home/username/.cargo
`plocate` couldn't find results in my home dir but `mlocate` could. How to search results in home dir?
I am really surprised by that other post you mentioned, as it can be very misleading. Just because an alias doesn't use parameters doesn't mean that aliases cannot set parameters. Of course you can put options in an alias, but it is just restricted, meaning the alias is replaced in one place.

$ alias ls='ls -l'
$ ls        # will run: ls -l
$ ls foo    # will run: ls -l foo

The problem that other question poses is if you want to add options to an alias that has an argument. So if you had an alias:

$ alias movetotrash='mv ~/Trash'   # no way to inject anything between 'mv' and '~/Trash'

So in your case:

$ alias locate='locate -i -A'   # will expand to 'locate -i -A' and then whatever else you type.

As for your specific questions: It is quite common for linux distributions to ship with default "options" via aliases; for example, ls frequently has an alias ls 'ls --color=auto', or for root login you might see alias mv 'mv -i'. So it can be considered a standard way of providing better defaults to users, and it uses the same name as the underlying binary. If a user doesn't want to use an alias and it is set in the standard environment, they can use unalias to unset the alias permanently, or prefix the command with a backslash when running it, such as \mv a b, which prevents alias expansion for that execution (as does using the full path, like /usr/bin/mv a b). I don't believe that zsh provides any extra capabilities in this area, and certainly nothing that would be a "standard". People often write wrapper shell scripts and sometimes shell functions. But for trivial software, alias is often the solution people use. If the program is complicated enough, it will usually gain an rc file for common user preferences. I think one tool that tried to make options a bit easier was the popt library, which allowed users to create their own options to software, but the popt library isn't widely used, and I don't think it had the ability to set the default.
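Applied to the question, a minimal sketch for ~/.zshrc, plus the escape hatches for when you occasionally want the unmodified command:

alias locate='locate -i -A'

locate word1 word2     # actually runs: locate -i -A word1 word2
\locate word1          # backslash skips alias expansion for this one call
command locate word1   # same idea, using the shell's 'command' builtin
unalias locate         # remove the alias for the rest of the session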
For context, I'm using zsh. Every time I use locate, I want to pass the -i and -A flags. Usually, if I can get away with it, I create an alias with the same name as the existing command to do this. But according to this question, aliases can't accept arguments, so I have to use a function instead. Usually I stop there because the idea of a function with the same name as a command feels wrong to me, though I can't say why. I was about to finally create such a function when I had this thought: this is a common pattern for me, wanting to default flags for command; is there an easier way of going about it? Perhaps zsh provides a better solution to this problem? That brought me to another thought: is it an anti-pattern to override an existing command? I've always done it because it allows me to skip an association in my head: e.g., "Why doesn't ll have a man page? Oh yeah: ll really means ls -la. I need to do man ls, not man ll. Etc." To summarize:Is it alright/idiomatic to override an existing command with an alias/function? Does zsh or some other tool provide a more direct way to default flags for a specific command?
How can I/Should I default flags when running a command?
Kasa's answer pointed me in the right direction: these are two different programmes. The Linux locate is actually mlocate, and can be installed on Mac with some struggle, as some people have done, while the Mac locate is a different and older version. More about the difference in this answer. However, the easier way is to use mdfind, which is already installed on Mac (it's probably the thing behind Spotlight), follows the same principles, and seems better than the old BSD locate. This allows for case-insensitive queries and for basename-only queries as I requested.
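For example (query strings hypothetical):

mdfind -name report.pdf             # match on the file name only, like locate -b
mdfind -onlyin ~/Documents budget   # restrict the search to one directory tree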
I often use locate as a command to find stuff system-wide. However, I find it very annoying that the osx version of locate doesn't seem to have the -b option to match the basename only; and so prints all the content of every folder that matches the query. It also doesn't have the -e option for checking whether the files have been removed since the last update of the database. On my linux machine (Ubuntu 20.04), those options, and many others, exist. The man page for it shows the date Sep 2012, while the Mac shows August 2006. Are these two entirely different programmes or different versions of the same? How do I get the better locate on Mac?
`locate` in linux vs osx
The updatedb command will scan the filesystems on your system and create an index of the names of the available files and directories. This indexing is performed as a non-privileged user. This means that the index will only ever contain the names of files that are accessible by all the system's users. Since your home directory is only accessible to yourself (you say in comments that you have rwx------ permissions on it), this means that it will not be indexed by updatedb. This in turn means that locate will never return names from within your home directory (using sudo locate instead of just locate will still query the same index, so that won't help). To solve this, you have two options:Loosen up the restrictions to your home directory (and to any directory beneath that that you want to be indexed by updatedb). The permissions should probably read rwxr-xr-x, or 755 in octal. Don't use locate to find files. Instead use find: find "$HOME" -name test.txtThis would look for anything called test.txt in or under your home directory.
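A sketch of the first option, using the octal mode suggested above:

chmod 755 "$HOME"    # rwxr-xr-x: others may traverse and list
sudo updatedb        # rebuild the index so the file gets picked up
locate -i test.txt   # should now be found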
System: Linux Mint 19.1 Tessa, edition: Cinnamon. Got a problem with the locate command. I created a test.txt file on the desktop. After that I did: sudo updatedb However, locate test.txt -i still doesn't show anything. Permissions on mlocate.db: -rw-r---- Working as a normal user, not root (that's why I was using the sudo command).
Locate doesn't work
Depends on the locate implementation and configuration. On my Ubuntu 16.04, the default configuration skips a few things: $ cat /etc/updatedb.conf PRUNE_BIND_MOUNTS="yes" # PRUNENAMES=".git .bzr .hg .svn" PRUNEPATHS="/tmp /var/spool /media /home/.ecryptfs /var/lib/schroot" PRUNEFS="NFS nfs nfs4 rpc_pipefs afs binfmt_misc proc smbfs autofs iso9660 ncpfs coda devpts ftpfs devfs mfs shfs sysfs cifs lustre tmpfs usbfs udf fuse.glusterfs fuse.sshfs curlftpfs ecryptfs fusesmb devtmpfs"This configuration skips bind mounts, the /tmp, /media, etc. directories, and various filesystems.
Does the locate database not save certain files? Like, are there files that are excluded from the database by default?
Do some files get excluded from being saved in the locate database?
From man locate:

To search for a file named exactly NAME (not *NAME*), use locate -b '\NAME'

locate -b '\java'

-b - Match only the base name against the specified patterns.
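Side by side (the backslash disables the implicit *...* wrapping described in the man page):

locate -b java      # any basename containing 'java' (treated as *java*)
locate -b '\java'   # basename exactly 'java'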
There are multiple installations of Java on my system, some silently installed by IDEs, and I wanted to find out where they are. So I thought to use locate to find them. My first try of locate java had several thousand hits, finding .*java.*. Is there a way to restrict locate to just find files with an exact name? Not paths containing Java. Not files containing Java as part of their name. PS: I had a similar problem before, so please ignore the Java part and treat this as a problem of finding files. It could just as well be a question of finding all occurrences of gcc.
how to find occurrences of a file using locate
That's what functions or scripts are for. e.g.

myfind() {
    search="$1"
    shift
    find . -iname "$search" "$@"
}

The "$@" on the end allows you to still specify other find options if you want to. Note, however, that find is quite fussy about the order of some options; some options only work if they come before the path (. in this case), and some cause find to whinge about them being after a non-option argument. e.g. find . -iname "*.txt" -maxdepth 1 causes the following whinge:

find: warning: you have specified the -maxdepth option after a non-option argument -iname, but options are not positional (-maxdepth affects tests specified before it as well as those specified after it). Please specify options before other arguments.

AFAIK, there's no option to turn off these warnings...or if there is, it's not called --quiet or --silent. Anyway, save the function above in a text file, e.g. myfind.txt. Then source it with: . myfind.txt. If you want it to be defined on every login, add the function definition to your ~/.profile or ~/.bash_profile, or source myfind.txt from them. This assumes you are using a Bourne shell like sh or bash. Note that some versions of sh may require the keyword function before myfind - e.g. function myfind() { ... }. If you are using csh or tcsh, then...well...don't, switch to a Bourne shell.
Is there any alternative out there for find similar to ag and ack? I'm really tired of having to type: find some/app -iname "some_file*"I much rather just type: find "some_file*"And have it search for the current folder and recursively in all subfolders.
Shorthand alternative to find
Try --regex (without a 'p') rather than -r (aka --regexp). This tells locate to use extended regexps rather than basic: locate --regex "^/var/lib/tomcat[0-9]{1,2}/" -l 10 Alternatively, escape { and } with \ to make them special in basic regex.
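The two dialects side by side:

locate -r '^/var/lib/tomcat[0-9]\{1,2\}/'      # -r is basic regex: braces must be escaped
locate --regex '^/var/lib/tomcat[0-9]{1,2}/'   # --regex is extended regex: braces work as-is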
This output is rather self-explanatory: XXXXX@debianvirtualbox:~$ locate -r "^/var/lib/tomcat[0-9]/.*" -l 10 /var/lib/tomcat8/conf /var/lib/tomcat8/lib /var/lib/tomcat8/logs /var/lib/tomcat8/webapps /var/lib/tomcat8/work /var/lib/tomcat8/webapps/ROOT /var/lib/tomcat8/webapps/websight /var/lib/tomcat8/webapps/ROOT/META-INF /var/lib/tomcat8/webapps/ROOT/index.html /var/lib/tomcat8/webapps/ROOT/META-INF/context.xml jakub@maredadebianvirtualbox:~$ locate -r "^/var/lib/tomcat[0-9]{1,2}/.*" -l 10 XXXXX@debianvirtualbox:~$I am trying to list first ten (-l 10) entries matching tomcat installation directory. If I just use [0-9] it properly matches tomcat8, however if I add a quantifier [0-9]{1,2} it matches nothing. Same goes for quantifiers + and ?, however * seems to work fine and so does this expression: ^/var/lib/tomcat[0-9][0-9]*/.*Why this happens and what's a good workaround?
Regex quantifiers are not working well with locate
Often, GNU grep and its BSD competition are just pretty slow. People like ag (aka the_silver_searcher), rg (aka ripgrep) or ack; they don't try to build an index of the text, they just search it anew for every query, but in a more efficient manner than grep. I'm using (mostly) rg these days, and it really makes searching the complete Linux source tree quite manageable (a "search every file, even if not a C header" rg FOOBAR takes ~3s when I've warmed the filesystem caches; GNU grep takes > 10s). There are also full-text search engines (mostly, xapian), which I use as plugins on my IMAP server to speed up full-text searching. That's the only use case where this has proven to actually make a difference to me. (Ceterum censeo mandbem esse delendam; our search tools are so fast that taking 30s to rebuild a friggin index of 190 MB of man pages is simply not acceptable; and the idea that gzip is a good compressor for really uniform data such as man pages, where one shared compression dictionary would make these things incredibly small, is another annoyance of mine. But things are intertwined enough that I can't be moved to get rid of mandb.)
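Typical ripgrep usage, for reference (pattern hypothetical):

rg FOOBAR        # recursive search from the current directory, honours .gitignore
rg -t c FOOBAR   # restrict the search to C files
rg -uu FOOBAR    # also search hidden and ignored files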
locate (or rather, updatedb) is somewhat simple: it takes the output of find for the required paths (usually '/'), sorts it, and then compresses it with a front-compression tool (frcode), in which the consecutive common prefixes are replaced by number of repeated characters. So I'm wondering, what's stopping anyone from creating something similar for full text search? Say, how about concatenating every file in the system, sorting every line with the format line:filename:linenumber, and doing front-compression? I guess you would end up with a faster grep, with the tradeoff of being outdated until the daily/weekly cron job runs, just like locate. Maybe locategrep would be overkill for the entire system, but I can see it being useful to speed up a large project which won't change much for the rest of the day. Does something like this exists already or is it trivial to implement with some known tools? Note: I would rather avoid enterprise-like solutions that include features beyond plain-text searching (but I appreciate regex support).
Just as there is "locate" to "find". Is there a database for a faster "grep"?
mlocate, by default, only shows files that the user has access permission to. At least on my CentOS 7 build, for example:

% rpm -qf /usr/bin/locate
mlocate-0.26-8.el7.x86_64

% locate /root/.ssh

% sudo locate /root/.ssh
/root/.ssh
/root/.ssh/authorized_keys
/root/.ssh/known_hosts

This works because locate is setgid and the data file is locked down to that group:

% ls -l /usr/bin/locate
-rwx--s--x 1 root slocate 40520 Apr 10 2018 /usr/bin/locate

% sudo ls -al /var/lib/mlocate
total 142820
drwxr-x--- 2 root slocate 4096 Feb 23 03:38 .
drwxr-xr-x 45 root root 4096 Dec 6 2018 ..
-rw-r----- 1 root slocate 146233302 Feb 23 03:38 mlocate.db

And, indeed, a normal user can't even locate the db file :-)

% locate '/var/*mlocate*'
/var/lib/mlocate

% sudo locate '/var/*mlocate*'
/var/lib/mlocate
/var/lib/mlocate/mlocate.db
/var/lib/mlocate/mlocate.db.khYLWG

The setgid option may work on FreeBSD as well.
This may come across as a silly question, because each file and directory on a system can and will have permissions to block an ordinary user from seeing various files, but for added security's sake (and context too long to get into): can mlocate have multiple databases for different groups of users? Or for every user, for that matter? The goal is to limit knowledge of, or access to, files that ordinary users shouldn't even know exist, while the root and sudo accounts can still see everything in the system using a root mlocate db. I've previously simply restricted read access to the mlocate database, but this is not an option on the current system. If there's a method to run mlocate as multiple installations, I'd only require two, and there are no storage constraints, just as an FYI. Thanks,
Different mlocate database for each user?
That very much depends on your implementation of locate. It's not a standard command, and there are a few different implementations with quite significant differences.

There's one implementation in GNU findutils. With that one:

locate -i word1 word2

locates files whose path contains either word1 or word2, case-insensitively, while

locate -Ai word1 word2

locates files whose path contains both. It also supports a --regex and a --regextype option, as for GNU find. By default, those are emacs-style regexps, some form of hybrid between BRE and ERE. With that one, you could do:

locate -ir 'word1.*word2\|word2.*word1'

The mlocate implementation (the default on Debian and derivatives) also supports -A. It has -r/--regex, but not --regextype, and its REs are basic regular expressions. On systems like GNU ones, whose BREs support \| for alternation as an extension, you can also do:

locate -ir 'word1.*word2\|word2.*word1'

ast-open has a locate as well: a ksh93 wrapper script around tw (at one point the intended successor of find). It supports neither -A nor -r, but you can use the full power of ksh93 wildcards, so you can use for instance perl-like look-ahead operators with:

locate '~(Pi:^(?=.*word1)(?=.*word2))'

Or ksh93's & glob operator:

locate -i '*word1*&*word2*'

It's particularly slow compared to the other ones, though, as the pattern is not anchored. It's better once anchoring (left and right) is restored with:

locate -i '~(lr)*word1*&*word2*'

One problem with piping to grep is that it doesn't work for file paths that contain newline characters. With GNU locate or mlocate, you can use the -0 option, though, to get NUL-delimited records, which you can combine with the -z option of GNU grep:

locate -i0 word1 | grep -z word2 | grep -z word3 | tr '\0' '\n'

Or -v RS='\0' in GNU gawk or @ThomasDickey's mawk:

locate -i0 word1 | awk -v RS='\0' '/word2/ && /word3/'

Or perl -ln0:

locate -i0 word1 | perl -ln0e 'print if /word2/ && /word3/'
I often use the following pipeline of locate (from findutils) and grep to find files whose pathnames contain two words, word1 and word2, in no specific order:

locate -i word1 | grep -i word2

I was wondering how to do that with a single, non-pipeline command. Is there a better way than my pipeline? Does locate support some regex flavor in which I can formulate my search pattern? Thanks. A solution with find is at https://unix.stackexchange.com/a/448006/674
Improve search for files by pathnames with locate and grep pipeline
Trying to learn about this, I have found out the basics of Synapse's operation, which can be presented here as an answer.

Not only does the Synapse launcher have a lot of plugins that enhance its operation; it is entirely based on plugins. Disabling all of them makes it useless: even Application Search is a plugin.

When just typing in Synapse, file search is done through the Zeitgeist plugin, which searches within the Zeitgeist logs. These are event logs, not file logs. More here. For a file to be found in this way, it has to have been accessed already in some way. Synapse cannot, and is not intended to, find an arbitrary file by simply typing part or all of its name. That can be done through the locate search, which is based on a specific plugin intended to run that command (by selecting the last entry in the list of the simple Synapse search, which is the only entry when nothing is found). The locate search is made within databases prepared by updatedb. The sudo updatedb command is needed to update the database. Once found by locate, if files are accessed/opened, they can then be found by the simple Synapse search. To be found by locate, a file needs (1) to be on a partition that is not excluded through the settings in /etc/updatedb.conf (see the excerpt below), and (2) to have been created before sudo updatedb was last run. Files created on the desktop are immediately found by Synapse. Folder search is based on a separate plugin. After a file has been opened and added to Zeitgeist, and is thus available with a simple search (without locate), other similar files will be found in the same way (e.g. with the same extension, or within the same folder); that is due to other plugins: "Hybrid Search" and "Related files". More here and here.

The answer to the above question is that normal Synapse file search (just typing in Synapse) uses different methods and tools than the search made with the locate command (selecting the last entry after a simple search and pressing ENTER). The normal search by just typing involves a tool (Zeitgeist) that only logs events, and thus finds only names of files already accessed (supplementary results are given because of the other plugins mentioned above). The search with locate covers all files listed when sudo updatedb was last run. Thus, it is the only way of finding files in Synapse that haven't been previously accessed and are not related to such files.
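As an illustration of point (1), here is roughly what /etc/updatedb.conf looks like (the values are illustrative; the exact lists vary by distribution). Anything matching PRUNEFS or PRUNEPATHS is skipped by updatedb and therefore never found by locate. Note that NTFS volumes mounted via NTFS-3G typically show up as filesystem type fuseblk, so if that type is pruned, the whole partition is invisible to locate:

# /etc/updatedb.conf (illustrative values)
PRUNE_BIND_MOUNTS="yes"
PRUNEFS="NFS nfs nfs4 autofs binfmt_misc proc sysfs tmpfs udf"
PRUNEPATHS="/tmp /var/spool /media /var/lib/os-prober"
PRUNENAMES=".git .bzr .hg .svn"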
I love Synapse in Xfce and want to know more about improving its use. I use it to restart, log out or shut down, launch applications, and access files and folders. I have been trying to fix a few problems that affected file access on my second drive, as it involved opening files with executable permission and searching on an NTFS partition. I'm not sure the second problem is fixed: all the files I have searched for until now on the second partition are accessed through the locate command; that is, typing the filename shows nothing, and I have to press ENTER for locate to run and find them. After being found and opened in this way, I would expect them to be found directly (without locate) the next time, but that is not the case. Such files are not even shown in the recent files (opening Synapse and pressing DOWN-ARROW; files do appear in the recent files list if accessed from the file manager instead of Synapse). On the other hand, at least some files and folders from $HOME are shown directly in Synapse without the need for locate to find them. What triggers the difference between these and the rest? I guess Zeitgeist is involved in all normal Synapse searches (the kind that doesn't involve locate), and the fact that Synapse is only showing me the $HOME files may be because the problem of Synapse not searching the NTFS partition (linked above) is not solved yet! I'm not sure I understand how the locate plugin is supposed to work. Does Zeitgeist need it in some cases, or are they completely separate processes?
Synapse launcher: what is the difference between the `locate` command and the simple search?
There are several implementations of locate, and the ones I'm aware of want either POSIX extended regexps or POSIX basic regexps. Neither supports lookaheads.
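A sketch of a workaround that stays within POSIX regexps: "apple not followed by t" can be stated positively as "apple at the end of the path, or followed by a character other than t". The single quotes also matter here, since inside double quotes bash still performs history expansion on !, which is what turned ?!t into ?touch latex_preamble.tex in the error above:

locate --regex 'apple($|[^t])'      # ERE grouping (e.g. GNU locate with --regextype posix-extended)
locate --regex 'apple\($\|[^t]\)'   # BRE escaping (e.g. mlocate; \| and mid-pattern $ are GNU extensions)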
Problem: I tried to implement a negative lookahead when searching for a file with locate, like this:

locate --regex "apple(?!t)"

However, I'm getting the following error, because there seems to be some substitution going on:

locate: invalid regexp `apple(?touch latex_preamble.tex)': Invalid preceding regular expression

How can I make this work? I tried escaping and quoting the pattern in various ways as well. NOTE: I'm aware I can do this with:

locate apple | grep -v applet

but I would like to know how I can get the regular expression to work.
Locate --regex with negative lookahead
The tool you want is lsof, which stands for "list open files". It has a lot of options, so check the man page, but if you want to see all open files under a directory:

lsof +D /path

That will recurse through the filesystem under /path, so beware of doing it on large directory trees. Once you know which processes have files open, you can exit those apps or kill them with the kill(1) command.
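A sketch of the usual follow-up, using fuser from psmisc as an alternative that can both list and kill in one step (the path is hypothetical, and the kill step should be used with care):

# See what is using the tree (lsof) or a single path (fuser -v):
lsof +D /mnt/stuck
fuser -v /mnt/stuck
# If /mnt/stuck is a mount point, -m matches every process using that
# filesystem, and -k sends SIGKILL to each of them:
fuser -vkm /mnt/stuck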
I tried to rm -rf a folder and got "device or resource busy". In Windows, I would have used LockHunter to resolve this. What's the Linux equivalent? (Please give as an answer a simple "unlock this" method, and not complete articles like this one. Although they're useful, I'm currently interested in just ASimpleMethodThatWorksβ„’)
How to get over "device or resource busy"?