output
stringlengths 9
26.3k
| input
stringlengths 26
29.8k
| instruction
stringlengths 14
159
|
---|---|---|
What? no /bin/ is not a symlink to /usr/bin on any FHS compliant system. Note that there are still popular Unices and Linuxes that ignore this - for example, /bin and /sbin are symlinked to /usr/bin on Arch Linux (the reasoning being that you don't need /bin for rescue/single-user-mode, since you'd just boot a live CD).
/bin
contains commands that may be used by both the system administrator and by users, but which are required when no other filesystems are mounted (e.g. in single user mode). It may also contain commands which are used indirectly by scripts
/usr/bin/
This is the primary directory of executable commands on the system.
essentially, /bin contains executables which are required by the system for emergency repairs, booting, and single user mode. /usr/bin contains any binaries that aren't required.
I will note, that they can be on separate disks/partitions, /bin must be on the same disk as /. /usr/bin can be on another disk - although note that this configuration has been kind of broken for a while (this is why e.g. systemd warns about this configuration on boot).
For full correctness, some unices may ignore FHS, as I believe it is only a Linux Standard, I'm not aware that it has yet been included in SUS, Posix or any other UNIX standard, though it should be IMHO. It is a part of the LSB standard though.
|
I read this up on this website and it doesn't make sense.
http://rcsg-gsir.imsb-dsgi.nrc-cnrc.gc.ca/documents/basic/node32.htmlWhen UNIX was first written, /bin and
/usr/bin physically resided on two
different disks: /bin being on a
smaller faster (more expensive) disk,
and /usr/bin on a bigger slower disk.
Now, /bin is a symbolic link to
/usr/bin: they are essentially the
same directory.But when you ls the /bin folder, it has far less content than the /usr/bin folder (at least on my running system).
So can someone please explain the difference?
| Difference between /bin and /usr/bin |
The directories /tmp and /usr/tmp (later /var/tmp) used to be the dumping ground for everything and everybody. The only protection mechanism for files in these directories is the sticky bit which restricts deletion or renaming of files there to their owners. As marcelm pointed out in a comment, there's in principle nothing that prevents someone to create files with names that are used by services (such as nginx.pid or sshd.pid). (In practice, the startup scripts could remove such bogus files first, though.)
/run was established for non-persistent runtime data of long lived services such as locks, sockets, pid files and the like. Since it is not writable for the public, it shields service runtime data from the mess in /tmp and jobs that clean up there. Indeed: Two distributions that I run (no pun intended) have permissions 755 on /run, while /tmp and /var/tmp (and /dev/shm for that matter) have permissions 1777.
|
According to FHS-3.0, /tmp is for temporary files and /run is for run-time variable data. Data in /run must be deleted at next boot, which is not required for /tmp, but still programs must not assume that the data in /tmp will be available at the next program start. All this seems quite similar to me.
So, what is the difference between the two? By which criterion should a program decide whether to put temporary data into /tmp or into /run?
According to the FHS:Programs may have a subdirectory of /run; this is encouraged for
programs that use more than one run-time file.This indicates that the distinction between "system programs" and "ordinary programs" is not a criterion, neither is the lifetime of the program (like, long-running vs. short-running process).
Although the following rationale is not given in the FHS, /run was introduced to overcome the problem that /var was mounted too late such that dirty tricks were needed to make /var/run available early enough. However, now with /run being introduced, and given its description in the FHS, there does not seem to be a clear reason to have both /run and /tmp.
| What's the difference between /tmp and /run? |
By default, the owner and group of /usr/local and all subdirectories (including bin) should be root.root and the permissions should be rwxr-xr-x. This means that users of the system can read and execute in (and from) this directory structure, but cannot create or edit files there. Only the root account (or an administrator using sudo) should be able to create and edit files in this location. Even though there is only one user on the system, it's generally a bad idea to change permissions of this directory structure to writable to any user other than root.
I would suggest placing your script/binary/executable into /usr/local/bin using the root account. It's a good habit to get into. You could also place the script/binary/executable into $HOME/bin and make sure $HOME/bin is in your $PATH.
See this question for more discussion:
Where should a local executable be placed?
|
From what I understand, the right place to put your own scripts is /usr/local/bin (for instance a script I use to back up some files). I notice that this folder is currently (by default) owned by root, and my normal user has no access to it. I am the only user on this computer. Shall I change this whole folder to my own user? Or is there another proper way to arrange permissions of /usr/local/bin?
| Permissions/ownership of /usr/local/bin |
What cd am I using?
If you're in Bash cd is a builtin. The type command even bears this out:
$ type -a cd
cd is a shell builtin
cd is /usr/bin/cd
cd is /bin/cdThe system will use the first thing in this list, so the builtin will be the preferred option, and the only one that works (see the section below on What is /bin/cd).
What's a builtin?
I like to think of builtins as functions that Bash knows how to do itself. Basically anything that you use a lot has been moved into the Bash "kernel" so that it doesn't have to go executing a process for each time.
You can always explicitly tell Bash that you want a builtin by using the builtin command like so:
$ builtin cdSee the help about builtin:
$ help builtinWhy isn't cd in hash?
The hash is meant only to "hash" (aka. "save" in a key/value pair) the locations of files, not for builtins or keywords. The primary task for hash is in saving on having to go through the $PATH each time looking for frequently used executables.
Keywords?
These are typically the commands that are part of Bash's programming language features.
$ type while
while is a shell keyword
$ type for
for is a shell keyword
$ type !
! is a shell keywordSome things are implemented in multiple ways, such as [:
$ type -a [
[ is a shell builtin
[ is /usr/bin/[
[ is /bin/[ ...and cd as you've discovered.
What is /bin/cd?
On my Fedora 19 system /bin/cd is actually a shell script:
$ more /bin/cd
#!/bin/sh
builtin cd "$@"But it doesn't do what you think. See these other U&L Q&A's for more details:What is the point of the `cd` external command?
"Why can't I redirect a path name output from one command to "cd"?Bottom line:
POSIX's requires that it's there and in this implementation, it acts as a test, confirming that you're able to change directories to X, but then returning a return code confirming or denying that this is possible.
| In a bash sub shell I get the following error when running cd
sudo: cd: command not foundThis is expected because I don't have a path. Usually to work around this I just provide the full path like so: (/usr/local/bin/foo)
Much to my surprise, cd does not appear to be in any of the normal places.
which cd
whereis cd
ls /bin | grep cdBy comparison, ls is right where I would expect.
which ls
/bin/lsWhere is the cd command located? And why is different from all the other commands?
Update
Another interesting tidbit, cd does not show up in hash
hash
0 /bin/ls
2 /usr/bin/find
2 /sbin/ip
1 /usr/bin/updatedb
1 /usr/bin/apt-get | where is `cd` located? [duplicate] |
According to Linux FHS, /usr is the location where Distribution-based items are placed and /usr/local is the location where you'd place your own localized changes (/usr/local will be empty after a base install). So, for example, if you wanted to recompile an Ubuntu package from source, their package manager would place the source for package in /usr/src/{package dir}. If you downloaded a program not managed by your distribution and wanted to compile/install it, FHS dictates that you do that in /usr/local/src.
EDIT: Short answer, yes, put your code in /usr/local/src.
|
I wanted to put my work files (code) in /usr/local/src, but I think it's already a folder that has some other semantic meaning.
What is that? Should I put source code there, or is there a better place?
Edit - I am the sole user and admin of the machine, and I don't want to use my home directory because it's on an NFS drive.
| What is the "/usr/local/src" folder meant for? |
First, an up-front conflict-of-interest disclaimer: I am a long-time GoboLinux developer.
Second, an up-front claim of domain expertise: I am a long-time GoboLinux developer.
There are a few different structures in current use. GoboLinux has one, and tools like GNU Stow, Homebrew, etc, use something quite similar (primarily for user programs). NixOS also uses a non-standard hierarchy for programs, and philosophy of life. It's also a reasonably common LFS experiment.
I'm going to describe all of those, and then comment from experience on how that works out in practice ("feasibility"). The short answer is that yes, it's feasible, but you have to really want it.GoboLinux
GoboLinux has a structure very similar to what you describe. Software is installed under /Programs: /Programs/ZSH/5.0.8 contains all the files belonging to ZSH 5.0.8, in the usual bin/lib/... directories. The system tools create symlinks to those files under a /System/Links hierarchy, which maps onto /usr¹. The PATH variable contains only the single unified executable directory, and LD_LIBRARY_PATH is unused. Multiple versions of software can coexist at once, but only one file by a given name (bin/zsh) will be linked actively at once. You can access the others by their full paths.
A set of compatibility symlinks also exists, so /bin and /usr/bin map to the unified executables directory, and so on. This makes life easier for software at run time. A kernel patch, GoboHide, allows those compatibility symlinks to be hidden from file listings (but still traversable).
Contra another answer, you do not need to modify kernel code: GoboHide is purely cosmetic, and the kernel does not depend on user-space paths in general². GoboLinux does have a bespoke init system, but that is also not required to do this.
The tagline has always been "the filesystem is the package manager", but there are reasonably ordinary package-management tools in the system. You can do everything using cp, rm, and ln, though.
If you want to use GoboLinux, you are very welcome. I will note, though, that it's a small development team, and you're likely to find that some software you want isn't packaged up if nobody has wanted to use it before. The good news is that it's generally fairly easy to build a program for the system (a standard "recipe" is about three lines long); the bad news is that sometimes it's unpleasantly complicated, which I'll cover more below.
Publications
There are a few "publications". I gave a presentation at linux.conf.au 2010 on the system as a whole that covers everything generally, which is available in video: ogv mp4 (also on your local Linux Australia mirror); I also wrote up my notes into prose. There are also a few older documents, including the famous "I am not clueless", on the GoboLinux website, which addresses some objections and issues. I think that we're all a bit less gung-ho these days, and I suspect that a future release will adopt /usr as the base location for the symlinks.NixOS
NixOS puts each installed program into its own directory under /nix/store. Those directories are named something like /nix/store/5rnfzla9kcx4mj5zdc7nlnv8na1najvg-firefox-3.5.4/ — there is a cryptographic hash representing the whole set of dependencies and configuration leading to that program. Inside that directory are all the associated files, with more-or-less normal locations locally.
It also allows you to have multiple versions around at once, and to use any of them. NixOS has a whole philosophy associated with it of reproducible configuration: it's essentially got a configuration management system baked into it from the start. It relies on some environmental manipulation to present the right world of installed programs to the user.LFS
It's fairly straightforward to go through Linux From Scratch and set up exactly the hierarchy you want: just make the directories and configure everything to install in the right place. I've done it a few times in building GoboLinux experiments, and it's not substantially harder than plain LFS. You do need to make the compatibility symlinks in that case; otherwise it is substantially harder, but careful use of union mounts could probably avoid that if you really wanted.
I feel like there was an LFS Hint about exactly that at one point, but I can't seem to find it now.On Feasibility
The thing about the FHS is that it's a standard, it's very common, and it broadly reflects the existing usage at the time it was written. Most users will never be on a system that doesn't follow that layout in essence. The result of that is that lots of software has latent dependencies on it that nobody realises, often completely unintentionally.
All those scripts with #!/bin/bash? No good if you don't have Bash there. That is why GoboLinux has all those compatibility symlinks; it's just practical. A lot of software fails to function either at build time or at run time under a non-standard layout, and then it requires patching to correct, often quite intrusively.
Your basic Autoconf program will usually happily install itself wherever you tell it, and it's fairly easy to automate the process of passing in the correct --prefix. Other build systems aren't always so nice, either by intentionally baking in the hierarchy, or by leading authors to write non-portable configuration. CMake is a major offender in the latter category. That means that if you want to live in this world you have to be prepared to do a lot of fiddly work up front in other people's build systems. It is a real hassle to have to dynamically patch generated files during compilation.
Runtime is another matter again. Many programs have assumptions about where their own files, or someone else's files, are found either relative to them or absolutely. When you start using symlinks to present a consistent view, lots of programs have bugs handling them (or sometimes, arguably correct behaviour that is unhelpful to you). For example, a tool foobar may expect to find the baz executable next to it, or in ../sbin. Depending on whether it reads its symlink or not, those can be two different places, and neither of them may be correct anyway.
A combined problem is the /usr/share directory. It's for shared files, of course, but when you put every program in its own prefix they're no longer actually shared. That leads to programs unable to find standard icons and the like. GoboLinux dealt with this in a pretty ugly way: at build time, $prefix/share was a symlink to $prefix/Shared, and after building the link was pointed to the global share directory instead. It now uses compile-time sandboxing and file movement to deal with share (and the other directories), but runtime errors from reading links can still be an issue.
Suites of multiple programs are another problem. GoboLinux has never gotten GNOME working fully, and I don't believe NixOS has either, because the layout interdependencies are so baked in that it's just intractable to cure them all.
So, yes, it's feasible, but:There is quite a lot of work involved in just making things function.
Some software may just never work.
People will look at you funny.All of those may or may not be a problem for you.¹ Version 14.01 uses /System/Index, which maps directly onto /usr. I suspect a future version may drop the Links/Index hierarchy and use /usr across the board.
² It does require /bin/sh to exist by default.
|
I'm a long time Linux user for over 15 years but one thing I hate with a passion is the mandated directory structure. I don't like that /usr/bin is the dumping ground for binaries or libs in /usr/lib, /usr/lib32, /usr/libx32, /lib, /lib32 etc... Random stuff in /usr/share etc. It's dumb and confusing. But some like it and tastes differ.
I want a directory structure where each package is isolated. Imagine instead if the media player dragon had it's own structure:
/software/dragon
/software/dragon/bin/x86/dragon
/software/dragon/doc/README
/software/dragon/doc/copyright
/software/dragon/lib/x86/libdragon.soOr:
/software/zlib/include/zlib.h
/software/zlib/lib/1.2.8/x86/libz.so
/software/zlib/lib/1.2.8/x64/libz.so
/software/zlib/doc/examples/...
/software/zlib/man/...You get the point. What are my options? Is there any Linux distro that uses something like my scheme? Can some distro be modified to work like I want it (Gentoo??) or do I need LFS? Is there any prior art in this area? Like publications on if the scheme is feasible or unfeasible?
Not looking for OS X. :) But OS X-inspired is totally ok.
Edit: I have no idea how PATH, LD_LIBRARY_PATH and other environment variables that depend on a small set of paths should work out. I'm thinking that if I have the KDE editor Kate installed in /software/kate/bin/x86/bin/kate then I'm ok with having to type the full path to the binary to start it. How it should work for dynamic libraries and dlopen calls, I don't know but it can't be an unsolvable engineering problem.
| What are the alternatives to the FHS? |
/usr/local is usually for applications built from source. i.e. I install most of my packages using something like apt, but if I download a newer version of something or a piece of software not part of my distribution, I would build it from source and put everything into the `/usr/local' hierarchy.
This allows for separation from the rest of the distribution.
If you're developing a piece of software for others, you should design it so that it can be installed anywhere people want, but it should default to the regular FHS specified system directories when they specify the prefix to be /usr (/etc, /usr/bin, etc.)
i.e. /usr/local is for your personal use, it shouldn't be the only place to install your software.
Have a good read of the FHS, and use the standard Linux tools to allow your source to be built and installed anywhere so that package builders for the various distributions can configure them as required for their distribution, and users can put it into /usr/local if they desire or the regular system directories if they wish.
|
I am developing a daemon that needs to store lots of application data, and I noticed that on my system (Fedora 15), there is a /usr/local/etc directory.
I've decided to install my daemon to /usr/local/bin, and I need a place for my config files.
I didn't see this on Wikipedia. Is this non-standard or is this in fact the standard place for programs installed to /usr/local/bin to store config files?
Reason being, I want to market this to sys-admins, and getting something like this wrong is not a great selling-point...
| What is the difference between /etc and /usr/local/etc |
In the Debian Policy is written that Debian follows the File Hierarchy Standard version 2.3. Note #19 on the standard says:Deciding what things go into "sbin" directories is simple: if a
normal (not a system administrator) user will ever run it directly,
then it must be placed in one of the "bin" directories. Ordinary users
should not have to place any of the sbin directories in their path.
For example, files such as chfn which users only occasionally use must
still be placed in /usr/bin. ping, although it is absolutely necessary
for root (network recovery and diagnosis) is often used by users and
must live in /bin for that reason.
We recommend that users have read and execute permission for
everything in /sbin except, perhaps, certain setuid and setgid
programs. The division between /bin and /sbin was not created for
security reasons or to prevent users from seeing the operating system,
but to provide a good partition between binaries that everyone uses
and ones that are primarily used for administration tasks. There is no
inherent security advantage in making /sbin off-limits for users.Short answer:Is there any reason I should not add /usr/local/sbin:/usr/sbin:/sbin to my path on Debian?As the note states, there is no reason why you should not do that. Since you're the only one using the system and you need the binaries in the sbin directories, feel free to add them to your $PATH. At this point let me guide you to an excellent answer how to do that correctly.
|
Compare Debian (left) and Ubuntu (right):
$ ifconfig $ ifconfig
bash: ifconfig: command not found eth0 Link encap ...
$ which ifconfig $ which ifconfig
$ /sbin/ifconfigThen as superuser:
# ifconfig # ifconfig
eth0 Link encap ... eth0 Link encap ...
# which ifconfig # which ifconfig
/sbin/ifconfig /sbin/ifconfigFurthermore:
# ls -l /sbin/ifconfig # ls -l /sbin/ifconfig
-rwxr-xr-x 1 root root 68360 ... -rwxr-xr-x 1 root root 68040 ...It seems to me the only reason I cannot run ifconfig without superpowers on Debian is that it's not in my path. When I use /sbin/ifconfig it does work.
Is there any reason I should not add /usr/local/sbin:/usr/sbin:/sbin to my path on Debian? This is a personal computer, I am the only human user.Versions used (uname -a):
Ubuntu:
Linux ubuntu 3.13.0-51-generic #84-Ubuntu SMP Wed Apr 15 12:08:34 UTC 2015 x86_64 x86_64 x86_64 GNU/LinuxDebian:
Linux debian 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u3 (2015-08-04) x86_64 GNU/Linux | Is there a reason I would not add /usr/local/sbin, /usr/sbin, /sbin to my path on Debian? |
This change was introduced by BSD after 1985 (BSD 4.2 was still documenting /usr) and in or before 1988 (BSD 4.3/SunOS 4.1 hier(7) manual page already documents /home). It was quickly followed by Solaris 2.0 (which kind of merged System V and BSD) and was later adopted by most other Unix vendors.
This is from the Solaris 2.0 useradd manual page: -D Display the default values for group, basedir, skel, shell,
inactive, and expire. When used with the -g, -b, -f, or -e
options, the -D option sets the default values for the
specified fields. The default values are: group other (GID of 1)
basedir /home
skel /etc/skel
shell /sbin/sh
inactive 0
expire Null (unset).Before that, older Unixes were using either the traditional /usr directory or some variants like /user1 documented in SVR3 and SVR4.0. Unix version 7 hier(7) manual page defines /usr as the default location for user's home directory:
/usr/wd/ initial working directory of a user, typically wd is the
user's login nameUnix version 6, the first Unix to be widely released outside of the Bell Labs had not the hier manual page yet but was already using and documenting /usr.
There are several reasons that explain the move from /usr to something else, including:With some Unix versions, upgrading the OS was blowing away the /usr directory.
Usernames like tmp, src, bin, local and the likes were forbidden as they clashed with existing directories under /usr.
Using /usr as an automounter base directory was not possible as it was not empty (Thanks to Johan for pointing this)
Diskless machines were expected to use a read only NFS share for /usr but read-write home directories |
Originally in Unix, /usr was used for user (home) directories. So if I had a user named alex, my home directory would be /usr/alex. (Interestingly, Plan 9, the successor to Unix, still has user directories in /usr.)
Nowadays, of course, we store home directories in /home. (At least on GNU/Linux. I don't know about other Unices, but OS X doesn't count.) At what point did this become standard practice? What Unix flavor did it appear in? How long did adoption by other Unices take? Has /home been adopted by everyone?
I've done some searching on here, but turned up nothing.
| At what point did the /home directory appear? |
FHS v2.3 was released ten years ago. Some things have changed since then (including the introduction of /run1). About three years ago, the Linux Foundation decided to update the standard and invited all interested parties to participate.
You can view the v. 3.0 drafts here and the section that describes /run here.
The distinction between /media and /mnt is pretty clear in the FHS (see Purpose and Rationale), so I won't go over it again. Same for the purpose of /run - see links.
The Gnome story is yet another thing. Gnome uses underneath an application called udisks (replaced later by udisks2) to automount drives/devices. For quite a long time, udisks default mounts were under /media. In 2012 the devs decide to move the mounts to /run/media (i.e. a private directory). So the different behaviour you're experiencing there is caused by the different versions of udisks that each DE is using.
1: see
What's this /run directory doing on my system and where does it come from ?
What is this new /run filesystem ?
|
In FHS-2.3, we have /media that holds mount points for removable media such as CD-ROMs and we have /mnt that holds temporarily mounted filesystems.
On the other hand, we have /run/media and /run/mount. For me, the CDs and USBs are mounted on /run/media.
I don't see any clear distinction between them(/media, /mnt, /run/mount) . What are their differences? I have seen similar trend (mount on /run/media) in fedora 20 - GNOME 3.10.4 and ubuntu 14.04.1 (installed on virtual box) with GNOME 3.10.4. But when I plugged in a USB flash (with auto-mounter script) on a system with Centos 6 and GNOME 2.28.2 it was mounted on /media
| what is the distinction between /media, /mnt and /run/mount? |
Linux distributions use the FHS: http://www.pathname.com/fhs/pub/fhs-2.3.html You can also try man hier.
I'll try to sum up answers your questions off the top of my head, but I strongly suggest that you read through the FHS:/bin is for non-superuser system binaries
/sbin is for superuser (root) system binaries
/usr/bin & /usr/sbin are for non-critical shared non-superuser or superuser binaries, respectively
/mnt is for temporarily mounting a partition
/media is for mounting many removable media at once
/dev contains your system device files; it's a long story :)
The /usr folder, and its subfolders, can be shared with other systems, so that they will have access to the same programs/files installed in one place. Since /usr is typically on a separate filesystem, it doesn't contain binaries that are necessary to bring the system online.
/root is separate because it may be necessary to bring the system online without mounting other directories which may be on separate partitions/hard drives/servers
Yes, /etc stands for "et cetera". Configuration files for the local system are stored there.
/opt is a place where you can install programs that you download/compile. That way you can keep them separate from the rest of the system, with all of the files in one place.
/proc contains information about the kernel and running processes
/var contains variable size files like logs, mail, webpages, etc.To access a system, you generally don't need /var, /opt, /usr, /home; some of potentially largest directories on a system.
One of my favorites, which some people don't use, is /srv. It's for data that is being hosted via services like http/ftp/samba. I've see /var used for this a lot, which isn't really its purpose.
|
Coming from the Windows world, I have found the majority of the folder directory names to be quite intuitive:\Program Files contains files used by programs (surprise!)
\Program Files (x86) contains files used by 32-bit programs on 64-bit OSes
\Users (formerly Documents and Settings) contains users' files, i.e. documents and settings\Users\USER\Application Data contains application-specific data
\Users\USER\Documents contains documents belonging to the user\Windows contains files that belong to the operation of Windows itself\Windows\Fonts stores font files (surprise!)
\Windows\Temp is a global temporary directoryet cetera. Even if I had no idea what these folders did, I could guess with good accuracy from their names.
Now I'm taking a good look at Linux, and getting quite confused about how to find my way around the file system.
For example:/bin contains binaries. But so do /sbin, /usr/bin, /usr/sbin, and probably more that I don't know about. Which is which?? What is the difference between them? If I want to make a binary and put it somewhere system-wide, where do I put it?
/media contains external media file systems. But so does /mnt. And neither of them contain anything on my system at the moment; everything seems to be in /dev. What's the difference? Where are the other partitions on my hard disk, like the C: and D: that were in Windows?
/home contains the user files and settings. That much is intuitive, but then, what is supposed to go into /usr? And how come /root is still separate, even though it's a user with files and settings?
/lib contains shared libraries, like DLLs. But so does /usr/lib. What's the difference?
What is /etc? Does it really stand for "et cetera", or something else? What kinds of files should go in there -- global or local? Is it a catch-all for things no one knew where to put, or is there a particular use case for it?
What are /opt, /proc, and /var? What do they stand for and what are they used for? I haven't seen anything like them in Windows*, and I just can't figure out what they might be for.If anyone can think of other standard places that might be good to know about, feel free to add it to the question; hopefully this can be a good reference for people like me, who are starting to get familiar with *nix systems.
*OK, that's a lie. I've seen similar things in WinObj, but obviously not on a regular basis. I still don't know what these do on Linux, though.
| Standard and/or common directories on Unix/Linux OSes |
/var/backups is specific to Debian. It is not specified in the FHS, and its use is not documented in Debian policy (See Debian Bug report logs - #122038). The behavior is described in http://ubuntuforums.org/showthread.php?t=1232703.
While I agree with @fpmurphy that there is little danger of Debian ever removing your backup files in /var/backup, I think that it is not good policy to use a directory that is so Debian-specific. For one, Debian might change its policy and break things. For another, the user community already has specific expectations about what the directory is for. And finally, because it is not "portable" in the sense that it is not clear where this directory would be in a non-Debian distribution.
If my understanding of the FHS is correct, it would be appropriate to put clones of Git repositories in /opt/<project_name>/.git or in /usr/local/src/<project_name/.git. My personal inclination would be to use the former because it leaves the door open to backup project resources that are not source files and therefore not in Git.
If you really want to emphasis the backup nature of these repositories, you could put them in /backups, or even /home/backups, two directory names that are often used as mount points for external storage.
|
There's a system-created /var/backups directory on Debian-based systems. I need a place to store backups of several git repositories (the primaries are on bitbucket). If I store them in /var/backup/git will that break apt-get, or will they get automatically deleted at inopportune times? Is there any reason I shouldn't use /var/backup? If there is, what is a reasonable alternative?
| Is it bad/dangerous/inappropriate to put arbitrary backups in /var/backups? |
Short summary of the page suggested by don_crissti:
Scattering utilities over different directories is no longer necessary and storing them all in /usr/bin simplifies the file system hierarchy. Also, the change makes Unix and Linux scripts / programmes more compatible.
Historically the utilities in the /bin and /sbin directories were used to mount the usr partition. This job is nowadays done by initramfs, and splitting the directories therefore no longer serves any purpose. The simplified file system hierarchy means, for instance, that distributions no longer need to fix paths to binaries (as they're now all in /usr/bin).
|
According to the Filesystem Hierarchy Standard the /bin directory should contain utilities needed in single user mode. In practice, many Linux distributions make the directory a symbolic link to /usr/bin. Similarly, /sbin is nowadays often a symbolic link to /usr/bin as well.
What's the rationale behind the symlinks?
| Why is /bin a symbolic link to /usr/bin? |
Your understanding is mostly correct, but there are a couple of extra things to consider:'binary' refers to something that isn't human readable. This usually refers to machine code, but many other files are also binary files in this sense, with most multimedia formats being a good example. The FHS however has a more specific usage for the term.
Libraries can be binary code. In fact, most of the stuff in /lib is going to be libraries compiled to machine code.
While things like cat are used in shell script like calls to code in libraries, they are not libraries in the FHS sense because they can be run by themselves.As a result of these points, the more common terminology among people who aren't writing standards documents is:Object files: These are natively compiled machine code, but may not even run or be callable. They typically have a .o extension unless they fall into one of the other categories, and are almost never seen on most systems except when building software. I've listed them here because they're important to understanding a few things below.
Executable files: These are files consisting of mostly self contained code that can be run directly. They may be either specially formatted object files which can be loaded directly by the kernel (things like cat, bash, and python are all this type of executable), or are interpreted by some intermediary program that is itself an executable (Minecraft, pydoc, and cowsay are all examples of this type of executable). Executables of the first type almost never have a file extension on UNIX systems, while executables of the second type may or may not. This is what the FHS refers to as 'binaries'. They can be run from other executables, but require calling special functions to invoke them (fork() and exec() in C and C++, things out of the subprocess module in Python, etc) and run as a separate process.
Libraries: These are files that contain reusable code that can be invoked by another library or an executable. Code in libraries is invoked (mostly) directly by other code once the library is loaded (referred to as 'linking' when talking about compiled code), and runs in the same process as the code calling it. There are three generic types of libraries:Static libraries: These are the original variety. They consist of an archive file (usually AR format) with a large number of object files inside, one for each function in the library. The object files get linked into the executable that uses them, so an executable that uses just static libraries is essentially 100% independent of any other code. On UNIX systems, they typically have a .a extension. The concept of static libraries doesn't really exist outside of compiled programming languages.
Dynamic libraries: These are the most common type of library used today. A dynamic library is a special object file, typically with a .so extension on UNIX (.dll is the standard on Windows), that gets loaded at run time by executables that use it. Most of what you'll find in /lib on production systems is dynamic libraries.
Modules: This is the equivalent of a dynamic library for an interpreted language. Handling is a little bit different than for a compiled language, and unlike with a compiled language, it's possible for a file to be both a module and an executable (see http.server in the Python standard library for an example). |
I'm trying to understand the Filesystem Hierarchy Standard. I have looked up both binaries and libraries, and as I currently understand it:
binaries are files of computer-readable code in binary format, that control the CPU and processor directly with bits.
libraries are functions usable by various programs, for convenience sake - like when you require a module in Javascript of PHP.
Is this understanding correct? If it is, why do we still separate libraries and binaries? Some libraries are binaries, right? And some binaries (cat, less, date, rm, cp, etc) are used and reused as though they were libraries... Can someone help explain the difference and help me find better definitions for these two words? Thank you.
| What's the difference between a binary file and a library? |
There is a difference between /opt and /usr/local/bin. So just symlinking binaries from one to another would be confusing. I would not mix them up.
/opt is for the installation of add-on application software packages, whereas the /usr/local directory is for the system administrator when installing software locally (with make and make install). /usr/local/bin is intended for binaries from software installed under /usr/local.
According to the File Hierarchy Standard, the correct way would be to add /opt/<package>/bin to the $PATH for each individual package. If this is too painful (when you have an uncountable number of /opt/<package>/bin direcories for example) then you (the local administrator) can create symlinks from /opt/<package>/bin to the /opt/bin directory. This can then be added to the users $PATH once.
|
Can programs installed under /opt be safely symlinked into /usr/local/bin, which is already in the PATH by default in Ubuntu and other Linux distros?
Alternatively, is there some reason to create a separate /opt/bin and add that to the PATH, as in this answer: Difference between /opt/bin and /opt/X/bin directories?
| How should executables installed under /opt be added to the path? |
You make your own mount point directories. If you want to ask why, I can only point to the great answer by Wouter Verhelst.
Internal drives
/mnt is a valid place to make your own if you like, and so is /.
/mnt may have been used for this purpose by some historical installation systems, as well as for removable media (before /media). It's still valid for you to do so, but the system itself is no longer supposed to set up anything in /mnt.
I think it's reasonable to use /mnt if you might make multiple mount points. It makes it easy to see all of them together, and it's known as one of the locations people like to use. Some other people like to use /Volumes - following the OS X system, or /vol. /data is common for a single mount point. /d/ is also used. /disk/ is almost certainly used by some, but may be distracting for storage which is not disk-based.
If you use /mnt, I would also create /mnt/tmp. Then there will still be a convenient directory for temporary mounts, the original use of /mnt which FHS mentions.
Preferred mount points for internal HDDs
It's possible that manually creating mount points under /media is a bad idea on some common systems. Modern Linux OS's will create mount points for removable media automatically, and it's possible the structure they create would conflict, or simply appear inconsistent with your own. You don't say what your system is, but you may be interested in portable guidelines, especially if you're asking about FHS. Note this reasoning is similar to why the FHS says the OS must not populate /mnt.
Mount point for system-wide USB disk
Network filesystems
It is sometimes recommended to mount network filesystems in a dedicated sub-directory e.g. /n/host, /nfs/host or /net/host etc.
For example, if you mount a network filesystem at /host and the network becomes unreachable, ls / may hang when it tries to stat the network filesystem. This could be undesirable and frustrating, at a time when you are already becoming frustrated.
|
I'm wondering what the FHS compliant mount points for internal harddrives and networkshares are? Many different tutorials are suggesting to mount them in subdirectories to /mnt or /media
According to the FHS 3.0 (File Hierarchy Standard):/media : Mount point for removable media
(This directory contains subdirectories which are used as mount points for removable media such as floppy disks, cdroms and zip disks.)
/mnt : Mount point for a temporarily mounted filesystem (This directory is provided so that the system administrator may temporarily mount a filesystem as needed. The content of this directory is a local issue and should not affect the manner in which any program is run)I assume that those mount points could go to /home/foo/extdrive /home/foo/nfsshare for a single user system, but where would I mount them accessible for all users?
Update:
FHS 3.0, Chapter 3.1, second "Rationale" paragraphnew directory in / (ie /workspace and /nfsshare) There are several reasons why creating a new subdirectory of the root filesystem is prohibited:
It demands space on a root partition which the system administrator may want kept small and simple for either performance or security reasons.
It evades whatever discipline the system administrator may have set up for distributing standard file hierarchies across mountable volumes.
Distributions should not create new directories in the root hierarchy without extremely careful consideration of the consequences including for application portability. | What are the FHS compliant mount points? |
I would place them in /var/log/package_name; it satisfies the principle of least surprise better than /var/opt/package_name/log. I don't have a citation for this; it simply matches where I'd look for logs.
I might also forego writing my own log files, and instead log to syslog with an appropriate tag and facility; if I'm looking for clean integration with established analysis tools, I don't believe I can do better for a communications channel:Every generic tool with "log analysis" as a listed feature already watches syslog.
Log file release and rotation semantics are handled for me; I don't have to set up a mechanism for logrotate to tell me to let go of the file and open a new one. I don't even have to tell logrotate about new files to rotate!
Offloading logs to central logging servers is handled for me, if the site demands it; Existing established tools like rsyslog will be in use if needed, so I don't have to contemplate implementing that feature myself.
Access controls (POSIX and, e.g. SELinux) around the log files are already handled, so I don't need to pay as much attention to distribution-specific security semantics.Unless I'm doing some custom binary format for my log and even then, I prefer syslog-friendly machine-parseable text formats like JSON. I have a hard time justifying my own separate log files; analysis tools already watch syslog like a hawk.
|
I am installing a custom package to /opt/package_name, storing configuration files in /etc/opt/package_name and static files in /var/opt/package_name/static/ - all following the conventions suggested by FHS. [1] [2] [3]
I also need to store some logfiles. I want them to be discoverable by analysis tools, so they should also be in a conventional location. Should these go in:/var/log/package_name (like a system package, even though this is a custom package)
/var/opt/package_name/log (following the /var/opt convention - but is this discoverable?)
something else? | Where should an /opt package write logs? |
/dev/shm : It is nothing but implementation of traditional shared memory concept. It is an efficient means of passing data between programs. One program will create a memory portion, which other processes (if permitted) can access. This will result into speeding up things.
/run/lock (formerly /var/lock) contains lock files, i.e. files indicating that a shared device or other system resource is in use and containing the identity of the process (PID) using it; this allows other processes to properly coordinate access to the shared device.
/tmp : is the location for temporary files as defined in the Filesystem Hierarchy Standard, which is followed by almost all Unix and Linux distributions. Since RAM is significantly faster than disk storage, you can use /dev/shm instead of /tmp for the performance boost, if your process is I/O intensive and extensively uses temporary files.
/run/user/$uid: is created by pam_systemd and used for storing files used by running processes for that user.Coming to your question, you can definitely use /run/lock directory to store your lock file.
|
I want to synchronize processes based on lock files (/ socket files). These files should only be removable by their creator user.
There are plenty of choices:
/dev/shm
/var/lock
/run/lock
/run/user/<UID>
/tmp
What's the best location for this purpose? And what way are above locations meant to be used for?
| Linux file hierarchy - what's the best location to store lockfiles? |
You should run
dpkg -S /lib/modules/*to check whether any installed package matches those directories. You can delete any directory for which the above says
dpkg-query: no path found matching pattern /lib/modules/...For directories still matching a package, you should remove the corresponding package first. If you’re using Ubuntu,
sudo apt autoremove --purgeshould take care of this for you, but do pay attention to the list of packages it shows before confirming the removal.
|
I saw that in /lib/modules/ I have 7 directories that related to the out of date kernel versions, can I fully delete them? It will not make any changes or hurt my system?
$ ls /lib/modules
5.4.0-26-generic 5.4.0-31-generic 5.4.0-37-generic 5.4.0-40-generic
5.4.0-29-generic 5.4.0-33-generic 5.4.0-39-generic 5.4.0-42-generic
$ uname -r
5.4.0-42-generic # remove all directories without this kernel directory | Can I remove all the recent kernel versions at /lib/modules/ |
The purpose of these files is to provide an easy means for other processes to communicate with them (e.g. send signals). This only makes sense for long running services, that’s why you find much less such files than processes running.
Usually those files are created by the service they represent, you will find a parameter like --pid-file or so in the invocation.
Depending on the type of init-system you will find files for services in different places.sysv-init: /etc/init.d/
upstart: /etc/init/
systemd: /etc/systemd/ |
I'm quite new in Linux world, and now I'm trying to understand FHS principles.
In /var/run I found about ten *.pid files like crond.pid which contain just PIDs.
There are more than ten processes running in the system and just ten files.
So what is their purpose and what generated them?
| What is the meaning/purpose of *.pid files in /var/run |
If you're using Linux, it's never "mandatory" to redirect to /dev/null instead of /dev/zero. As you've noticed, you'll get the same result either way.
That said, you should always redirect to /dev/null if you're discarding data. Because everyone understands that writing to /dev/null means throwing the data away; it's expressing your intention.
On the other hand, writing to /dev/zero also throws your data away, but it's not immediately obvious that that's what you're trying to do.
Besides that, I'd be concerned whether writes to /dev/zero are allowed on other Unices, like the BSDs etc. I don't think /dev/zero is even required by POSIX, while /dev/null is. So using /dev/null for its intended purpose is maximally portable; doing anything else is sacrificing portability for no gain.
|
According with the FHS about /dev at:6.1.3. /dev : Devices and special filesit contains:
The following devices must exist under /dev./dev/null
All data written to this device is discarded. A read from this device will return an EOF condition./dev/zero
This device is a source of zeroed out data. All data written to this device is discarded. A read from this device will return as many bytes containing the value zero as was requested.
...Observe that both have:
All data written to this device is discardedI read many tutorials where /dev/null is always used to discard data. But because both have the same purpose about writing (discard)
QuestionWhen is mandatory use /dev/zero over /dev/null for write/discard purpose?BTW for other differences - practically mostly about read - we have available:Difference between /dev/null and /dev/zero | When is mandatory use /dev/zero over /dev/null for write/discard purpose? |
My guess is that WebOS is designed to be installed on two different filesystems, a root filesystem that is read-only in normal operation and a filesystem mounted on /var that is read-write in normal operation. Since home directories need to be writable, they are placed somewhere under /var. This kind of setup is fairly common on unix systems that run off flash (such as PDAs¹ and embedded unices).
While /home is mentioned by the Filesystem Hierarchy Standard on Linux and is generally common amongst unices, it is not universal (the FHS lists it as “optional” and specifies that “no program should rely on this location”). Sites with a large number of users sometimes use /home/GROUP/USER or /home/SERVER/USER or /home/SERVER/GROUP/USER. And I've seen directories rooted in other places: /homes, /export/home, /users, /net, ... In fact, a long long time ago, the standard location for home directories was /usr.
¹ For example Android (not a unix, but running on a Linux kernel) has a read-only root filesystem and a writable filesystem on /data.
|
As far as I understand, the traditional place for home directories is beneath /home. Some Linux variants seem to keep them in /var/home, what's the reason for that?
| Why would I keep home directories in /var/home? |
According to the Wikipedia page the standard is for "Unix and Unix-like operating systems". While it may have grown out of a predominantly GNU/Linux environment, the intention seems to have consistently positioned it as focused on the broader *nix world.
The first version, originally bearing the catchy name, FSSTND, was published in 1994. The accompanying FAQ describes how it originated:The FSSTND is a consensus effort of many Linux activists; the main
arm of their discussion takes place on the FSSTND mailing list...
The FSSTND draws ideas from POSIX, 4.4BSD, SVR4, SunOS 4, MCC,
Slackware, SLS, (in no particular order) and many other systems. We
have not followed any one operating system layout in entirety.
Instead we have tried to take the best of each filesystem layout and
combine them into a homogenous whole, well suited to the needs of
Linux users everywhere.The Linux Foundation is currently working on the next version, FHS 3.0, and clearly indicated they see it as applying to the wider Unix ecosystem:The Filesystem Hierarchy Standard (FHS) is a reference describing the conventions used for the layout of a UNIX system. It has been made popular by its use in Linux distributions, but it is used by other UNIX variants as well.As to whether in practice the FHS is widely adopted: it is, but inconsistently.
|
The Filesystem Hierarchy Standard says where to put stuff in a UNIX distribution.
Is the FHS used/designed for use outside of GNU/Linux, or is it mostly limited to GNU/Linux?
| Is the Filesystem Hierarchy Standard a UNIX standard or a GNU/Linux standard? |
This splitting is pretty typical for most services. I'm on Fedora but most distributions do the same in terms of organizing files based on their type, into designated areas.
Taking a look at the Postgres SQL server:The configuration files go into /etc/
Executables go into /usr/bin
Libraries go into /usr/lib64/pgsql/
Locale information goes into /usr/share/locale/
Man pages and docs goes into /usr/share/
Data goes into /var/lib/The rational for having a libraries directory usr/lib/postgresql in your case, which is equivalent to /usr/lib64/pgsql/ for my install, is that applications can make use of libraries of functions that are provided by Postgres. These functions are contained in these libraries.
So as an application developer, you could link against the libraries here to incorporate function calls into Postgres, into your application. These libraries will often times include API documentation, and the developers of Postgres make sure to keep their API specified and working correctly through these libraries, so that applications that make use of them, can be guaranteed that they'll work correctly with this particular version of Postgres.
|
It appears that the Postgresql installation is split into three folder locations on Debian:Configuration: /etc/postgresql
Binaries: /usr/lib/postgresql
Data: /var/lib/postgresqlI understand the benefits of splitting up the configuration files and the data, however, the binaries location is confusing to me -- why wouldn't it simply be in /usr/bin?
More to the point, why would some binaries go into /usr/bin and others into /usr/lib?
| Logic behind Postgres binary installation path on Debian |
A locally installed package under /usr/local, or /opt per the FHS standard, means packages not installed by the default distribution, but packages installed specifically for that system.The directories /opt/bin, /opt/doc, /opt/include, /opt/info, /opt/lib,
and /opt/man are reserved for local system administrator use. Packages
may provide "front-end" files intended to be placed in (by linking or
copying) these reserved directories by the local system administrator,
but must function normally in the absence of these reserved
directories.
Programs to be invoked by users must be located in the directory
/opt/<package>/bin or under the /opt/ hierarchy. If the
package includes UNIX manual pages, they must be located in
/opt/<package>/share/man or under the /opt/ hierarchy, and
the same substructure as /usr/share/man must be used.
Package files that are variable (change in normal operation) must be
installed in /var/opt. See the section on /var/opt for more
information.
Host-specific configuration files must be installed in /etc/opt. See
the section on /etc for more information.
No other package files may exist outside the /opt, /var/opt, and
/etc/opt hierarchies except for those package files that must reside
in specific locations within the filesystem tree in order to function
properly. For example, device lock files must be placed in /var/lock
and devices must be located in /dev.The packages in question can be installed either by the sysadmin, or given the appropriates rights, by other users.
Frequently, these applications are locally compiled or run as scripts, but there are alternative methods for deploying them, such as distributing pre-compiled binaries or packages to a defined set of servers. In cases where a system administrator is responsible for the installation, they can compile and package the application in adherence to the distribution standards, like using Debian's .deb package format. Additionally, I maintain local repositories for this purpose.
|
I've been scratching my head over the File System Hierarchy Standard recently and in numerous occasion, when talking about the /usr/local directory, I came across the term "locally installed packages". Could someone please explain what is exactly meant by "local" in this context?
| What is meant by "locally installed package" in the world of Unix? |
rpmlint is a tool to check RPMs against some sort of packaging policy. Its configuration is typically distribution dependant and it checks packages against the particular distribution policy. Checking your own packages is fine as long as this is what you want.
If your policy differs from the distribution policy, you either have to configure rpmlint accordingly, refrain from using it or ignore the specific errors.
The following should do the trick when added to /etc/rpmlint/config or ~/.config/rpmlint (not tested):
addFilter("E: dir-or-file-in-usr-local")Sources:http://forums.opensuse.org/showthread.php/450353-How-to-disable-RPMlint-error |
I'm building some rpm packages and checking for standards and style conformance using rpmlint. The packages are specific to systems at my place of work and they won't get pushed upstream. Our packages include a variety of software, including in-house software, patched versions of software from the repositories, and software not available from the official repositories. We install local packages into /usr/local for many reasons:avoids naming conflicts with official packages
prevents yum update from clobbering local packages
allows local packages to live on separate partition or disk and/or be shared over NFS so that packages and configurations can be shared among hosts
allows us to have greater control over packages that are installed from sources outside the official repository, many of which do not conform to the standard installation paths (bin, lib, include, share, etc.)However, rpmlint gets very complainy when run on a package that installs files to /usr/local. For instance, on a custom build of GNU Hello World, this is what rpmlint -i has to say:
hello.x86_64: E: dir-or-file-in-usr-local /usr/local/hello-2.8/bin/hello
A file in the package is located in /usr/local. It's not permitted for
packages to install files in this directory.I'm aware of the filesystem hierarchy standard, according to which:The original idea behind '/usr/local' was to have a separate ('local')
'/usr' directory on every machine besides '/usr', which might be just
mounted read-only from somewhere else. It copies the structure of
'/usr'. These days, '/usr/local' is widely regarded as a good place in
which to keep self-compiled or third-party programs. The /usr/local
hierarchy is for use by the system administrator when installing
software locally. It needs to be safe from being overwritten when the
system software is updated. It may be used for programs and data that
are shareable amongst a group of hosts, but not found in /usr. Locally
installed software must be placed within /usr/local rather than /usr
unless it is being installed to replace or upgrade software in /usr.We are, in fact, following these standards and installing our local software to /usr/local for just these reasons, so I don't see why there should be anything wrong with using a package manager to install packages to /usr/local. However, I'd also like our packages to be standards-compliant, if for no other reason than consistency among our local packages. So why does rpmlint throw an error for files in /usr/local? Shouldn't this be at the packager's descretion? Can I ignore this error or at least make rpmlint print a warning instead?
| Why is "dir-or-file-in-usr-local" an error rather than a warning? |
It has been suggested that /usr/local/lib should be on the default path and it should be considered a 'bug' in Linux variants like Red Hat where it isn't.
This answer https://stackoverflow.com/a/17653893
points out the salient parts of http://linuxmafia.com/faq/Admin/ld-lib-path.htmlMany Red Hat-derived distributions don't normally include /usr/local/lib in the file /etc/ld.so.conf. I consider this a bug, and adding /usr/local/lib to /etc/ld.so.conf is a common ``fix'' required to run many programs on Red Hat-derived systems.I raised this with Red Hat and I now disagree.
Red Hat provided packages are never installed to /usr/local on systems where vendors do install to /usr/local the answer is different. On those systems /usr/local/lib can reasonably be expected to be on the default search path.
Red Hat pointed out that /usr/local/lib should not be on the default search path as any library added there could be picked up by RPM and yum.
I investigated this further. If you install your own version of a system library in /usr/local/lib then it could satisfy a dependency of another system package you install normally via RPM or yum. Obviously this could affect system stability. Worse it would do so quite subtley. yum check could report that you have all the vendor versions of all the packages you need
and not notice you have you own version of something significant in /usr/local/lib.
On systems using a different package manager this may not apply.
I don't have a complete answer to what to put in your RPATH. However, I think you should avoid depending on libraries in /usr/local/lib and instead have them installed to /opt (i.e. somewhere you control as part of your installation) wherever practical.
| Under the FHS, system packages (e.g. RPMs) install libraries to /usr/lib (or /usr/lib64).
Similarly, libraries compiled using the old "configure;make;make install" routine, which are not part of the system distribution, by default get installed to /usr/local/lib (or /usr/local/lib64).
In general it is considered bad form to require users to alter LD_LIBRARY_PATH or ld.so.conf for applications they install.
See for example:
http://web.archive.org/web/20060719201954/http://www.visi.com/~barr/ldpath.html
However, shouldn't /usr/local/lib be an exception to this rule?
If that's the case why don't many/most distributions include /usr/local/lib on the library search path by default?
So far only ArchLinux seems to have considered this to be bug
see http://bugs.archlinux.org/task/20059?project=1&opened=2263
& the related http://bbs.archlinux.org/viewtopic.php?id=99807
Is it more correct for an application that needs a library in /usr/local/lib to include /usr/local/lib in its RPATH or to expect the OS to have that setting already? I dislike the idea of using anything not based on $ORIGIN in the RPATH.
This is not a question of pedantry as it has implications for system stability and how software should be packaged.
| Why isn't /usr/local/lib on the library path by default? [closed] |
You don't need to keep it. However, you may want to keep the package tarball itself for:
make uninstallGenerally source packages have this as a make target so that you can tidily remove the package from your system if desired. It should not depend on preserving the state of the build, so you can erase the directory and then later unpack the tarball and just do it.
Things from a git repo may be less consistent. You can check to see if the target exists with make --dry-run uninstall1. If so, tar or otherwise archive the directory yourself and stash it.
If you know you can get the same package in the same version anytime, you don't need to keep the tarball either. And of course, if you know what was installed and it is simple and straightforward (e.g. just an executable and a man page), this is not a big concern.1. Implying a way to deduce what's installed by make install ;)
|
If I'm installing from source, do I need to keep the extracted tarball directory? So if I download the git tarball. I then do:
tar -xvzf git.tar.gzThis will create a git.x.x. directory, into which I cd, then run ./configure etc. Once I'm done with this process, and git, or whatever is installed, do I need to keep the original git.x.x extracted directory, or is that just used for compiling the program?
I'm somewhat confused by all the directories, and folders used for programs.
| Installing from source - do I need to keep the extracted tarball directory |
The short answers are, yes, it was done for compatibility (lots of programs referenced /bin/sh and /bin/ed), and in the early days /bin and /usr/bin contained totally disjoint sets of files. /bin was on the root filesystem, a small disk that the computer's boot firmware had to be able to access, and held the more critical and often-used files. /usr/bin was on /usr, typically an entirely separate, larger disk. /usr, at first, also contained users' home directories. As /usr grew, we would periodically replace its drive with something larger. The system could run with no /usr mounted, even if wasn't all that useful.
/usr's disk (or disk partition) was mounted after the Unix kernel had been booted and the system was partway through the user-mode boot process (/etc/rc), so programs like sh and mount and fsck had to be in the root filesystem, generally in /bin and /etc. Sun had even rearranged / and /usr so that a shared copy of /usr could be mounted read-only across a network. /usr/tmp became a symlink to /var/tmp. /var was either on the root filesystem or, preferably, on another partition.
I believe it was Sun that decided, at one point, that it wasn't worth heroically trying to have a system be able to come up if its /usr was trashed. Most users either had / and /usr on the same physical disk - so if it died, both filesystems were toast - or had /usr mounted read-only from a server. So some critical programs used for system boot and maintenance were compiled statically and put in /sbin, but most of the programs in /bin were moved to /usr/bin and /bin became a symlink to /usr/bin.
System V prior to R4 didn't even have symlinks. Sun and AT&T worked to combine SunOS and SVR3, and that became SVR4 (and Solaris 2). It had /bin as a symlink to /usr/bin.
So when that web site says "On SysV Unix /bin traditionally has been a symlink to /usr/bin", they really should have said "On System V Release 4 and followons, ...".
|
This systemd wiki page about the /usr merge, under Myth #6, states that /bin has traditionally been a symlink to /usr/bin on System V UNIX.
What is the motivation for this? For backwards compatibility it makes sense, but I don't understand why it was like this in the early days. (Or, am I misunderstanding? Did early UNIX versions distinguish between /bin and /usr/bin, and System V changed that by merging them?)
| Why does System V traditionally symlink /bin to /usr/bin? |
One option is to use a distro with a merged /usr; then you can mount /usr RO and the rest RW, and have most of the relevant stuff RO. This doesn't catch /etc, though, which you might want. Not quite a solution, more a workaround.
Another is to make a single BTRFS volume with subvolumes for all the mounts you want, then mount with -osubvol=<whatever>. These mounts can have individual mount options, but in the default configuration (without any quota setup) they'll all count against the entire BTRFS volume space-wise, such that you'll be able to put new data anywhere you like as long as the whole FS has space left.
|
My objective is to have the physical storage for the Linux FHS read/write directories (/home, /srv, /tmp, /var) on a separate (logical or physical) disk from the potentially read-only rest of the root file system.
I know I can create four partitions on my second disk and use each partition for one of the beforementioned directories using mount. But I don't feel like determining the required storage space for the four directories upfront, even if I may be able to correct the sizes of the (logical or physical) partitions later.
Can this be achieved and how?
| How to split FHS read-only and read/write directories across two disks with Linux/systemd, without partitioning the raed/write disk? |
You should choose /var/lib.
/usr/com does not exist in FHS 2.3 or FHS 3. FHS 2.3 FHS 3.0
sharedstatedir is a concept in GNU autotools and GNU coding standards
GNU and freestandards.org do not always align.
The issue you mention came up in a 2006 mailing list post. In the case of Red Hat, the conclusion was to use /var/lib
Technically, if you're working on an open source project that defaults prefix to /usr/local, you could possibly use /var/local. But I don't believe anybody does that in practice. For one, note that /var/local is probably empty on your system. For two, note that as soon as you or anybody running ./configure changes prefix to /usr, you can't use /var/local, and the only remaining option is /var/lib.
|
I'm working on a FHS2-conforming application that used to store data in $sharedstatedir (i.e. $(prefix)/com, e.g. /usr/local/com).
This directory is no more in FHS 3.0, and it seems that we need to start using either/var/lib, which should storeVariable state informationor more verbosely,state information pertaining to an application or the system. State information is data that programs modify while they run, and that pertains to one specific host.or
/var/local, which should storeVariable data for /usr/local(No more information is given about /var/local.)Which one of these should we use?
Bonus question: Is there a variable for /var/lib / /var/local, similarly to sharedstatedir and friends that we should use, or should we simply hard-code the path into our makefiles?
| Where to store shared data in FHS 3.0? |
Yes, it is very similar to /mnt and is designed to contain nfs shared directories from remote hosts.
If there is a NFS server named nfsserver sharing a directory named shared-directory, you can access it just by listing or reading files in /net/nfsserver/shared-directory[/filepath].
This featured is provided by the automounter, was first implemented by Sun Microsystems in SunOS 4 (1988).
Unlike Linux, Solaris is documenting it in its file system hierarchy standard documentation:
$ man filesystem
...
/net Temporary mount point for file systems that are mounted
by the automounter.Note that the /net directory is not hardcoded and you can select a different one by editing the /etc/auto.master or /etc/autofs/auto.master configuration file. See for example this documentation page.
Note also that the same mechanism can be used to automount CIFS or fuse based (e.g. sshfs) file systems shares. See this Gentoo wiki page or that Ubuntu documentation one.
|
I notice on my system (Manjaro Linux) that:I have an empty directory named /net
This directory is not mentioned in the Filesystem Hierarchy StandardWhat is the intention behind this directory (quoting chapter and verse)?Is it like /mnt (which is for temporary mounts) but for network (eg sshfs, nfs) mountpoints?
Or, in other words, is it like /media, but for non-removable non-temporary mount points?
| Purpose of /net directory |
Considering /home is generally used for end users' home directories, it is not a good practice to mount general use filesystems in /home, as it may lead to a confusion later on with other sysadmins, whom, one day, will take over this system from you upon your departure for greener pastures.
I am not familiar with seafile-server, but assuming it is a 3rd party application and its related directory tree, then it is fine to mount it under /opt.
Having said all of this, mounting a directory on either /home or /opt, technically has no difference. Just make sure you are not doing nested mounts, i.e., mounting a filesystem over another filesystem. Even though this is technically possible, it lends itself to problems in the future, if the higher level filesystem decides to go bust one day. I know it is not your question, but just make sure, wherever you are mounting this filesystem under, is not a mounted filesystem, but a plain directory on root filesystem (i.e., under "/").
|
Reading Filesystem Hierarchy Standard I was considering those directories:/opt: Optional application software packages.
/home: Users' home directories, containing saved files, personal settings, etc.I was more inclined to use /opt but many tutorials use /home (e.g. Archlinux wiki)
QuestionWhere should I install seafile-server: /opt or /home? | Where should I install seafile-server: /opt or /home? |
"Qualifier"a word or phrase, especially an adjective, used to attribute a quality to another word, especially a noun.
(in systemic grammar) a word or phrase added after a noun to qualify its meaning.The strings 32 and 64 are qualifiers to the path /usr/lib that qualifies the path's use. With 32, making it /usr/lib32, it denotes the specific path for 32-bit (only) libraries, as the quoted text says, on 64-bit machines.As Stephen Kitt points out in comments below, other qualifiers than just "the number of bits on an architecture" may be found on some systems, especially on MIPS systems.
|
The Linux FHS (Filesystem Hierarchy Standard) refers to directories of the following form:
/lib<qual>It describes such directories as follows:There may be one or more variants of the /lib directory on systems which support more than one binary format requiring separate libraries. Similarly, it refers to directories:
/usr/lib<qual>And describes them as:/usr/lib performs the same role as /usr/lib for an alternate binary format, except that the symbolic links /usr/lib/sendmail and /usr/lib/X11 are not required.The FHS Wikipedia article gives the following alternate descriptions for these directories:/lib<qual>
Alternate format essential libraries. Such directories are optional, but if they exist, they have some requirements.
/usr/lib<qual>
Alternate format libraries, e.g. /usr/lib32 for 32-bit libraries on a 64-bit machine (optional).I'm assuming that the string <qual> is a mnemonic for something. Is it? If so, what does it represent?
| What does <qual> stand for in the FHS? |
When I add that path to PATH in /etc/environment, the user can call the script without providing the full path but the daemon can not; it just says "not found".According to this source, which is IBM AIX documentation (I could not find anything else) but is presumably true in general:1 The first file that the operating system uses at login time is the
/etc/environment file. The /etc/environment file contains variables
specifying the basic environment for all processes.Note that it is not sourced in any system wide .profile, so this is hard-coded somewhere. However, if it applies "at login time", it will not apply to a daemon, which is started by init and never logs in (although "for all processes" contradicts this, perhaps that was just a poor choice of words).
According to this superuser Q&A, /etc/environment is part of PAM, which supports the "at login" premise and again means it will not be used by init spawned daemons. There are a lot of other references for this too, but not, it seems, actual PAM documentation.should I rather always use full path names?This is the most common and generally recommended process -- it is possible for daemons to start with no $PATH at all. So you could set this yourself in a start-up script, or, as you say, use full path names as appropriate.
1. "/etc/environment" does not appear at all in what would seem to be the relevant POSIX specs [1] [2].
|
I want to add some scripts for custom administrative work to a Linux server (Ubuntu 12.04). Ultimately those scripts are callback scripts from at least one daemon (PostgreSQL in my case but that shouldn't matter). In order for the daemon to find my script, I must provide the full path; I used /opt/<package>/bin as per the FHS.
When I add that path to PATH in /etc/environment, the user can call the script without providing the full path but the daemon can not; it just says "not found".
So my question is basically twofold:How do I add paths to PATH for daemons?
Is it a good idea anyway? Or should I rather always use full path names? | How to add new elements to PATH for daemons (or other best practices)? |
I did a bit of research of my own. Main source: http://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/c23.html
Long story short: there is nothing that prohibits a user from creating files. However, in standard linux FHS, only some directories are writeable by everyone. As long as you use a distribution that follows this convention, you should only check the following directories (as shown by a test on my own system):/dev/shm (mounted by default in some distributions)
User home directory
/var/tmp
/var/run/screen/S-rubenf
/tmp
/mnt/usb-disk (mounted with gid=users)Source:
find -type d |
while read DIR; do
if touch $DIR/test_can_be_removed123 2>/dev/null; then
rm $DIR/test_can_be_removed123
echo $DIR >> writable_directories
fi
done |
I need to ensure that, when deleting a specific user from a system, all of his/her files are removed. User creation/deletion will happen a lot on this system, so I want to reuse UID's and want to ensure the new user does not have access to any files of the old user.
My question is two-fold:Is there a general and easy way to find all files owned by a specific user? Or is a system-wide search -uid n my only option?
If a system-wide search is the only option, then which directories are generally writeable by a normal user (suppose a distribution following FHS)?His home directory
/tmp
??The user does not have sudo privileges, so he can only write in places that are world-writable in a standard Unix filesystem.
| Which directories are writeable in a system following FHS? |
In /var/www, create a folder public_html and reconfigure that as your root directory for Apache. Files in /var/www can be included, but not accessed directly.
|
I have a PHP script running on my LAMP server that requires certain files in order to produce HTML content. The script is publicly accessible (i.e. http://example.com/script.php) but the files required by the script must secured. I could probably place the files in any directory other than /var/www and they would be relatively safe, but I'm trying to learn the Linux filesystem and I'd like to use the most appropriate place (according to FHS). Any help would be great.
| Where to securely place files needed by PHP in Linux filesystem (according to FHS) |
/var is kind of global directory for a system, and all cache-data and logs of all users appear to be shared among them.This is partially incorrect. Not all cache and logs are stored under /var, and what is stored there is not necessarily shared by all users. Applications and/or the OS own what is stored in /var.
The only exception is a directory which is effectively shared and writable by all users: /var/tmp. Users and/or application deciding to store something here can still protect the subdirectories and files they create with unix file permissions.I mean that programs (packages) being launched on behalf of different users will use the same cache, let alone the fact that all local user can read logs and look through cache-data of each other.No, different users generally use different cache. There are also cases where a common cache is a plus.
When confidential / personal data is stored under /var, this data is protected by the application so the users are not granted the right to see other people's data; e.g. the mail spool is not readable by (non root) users.
|
I'm quite new in the UNIX world, so feel free to let me know if my question is silly.
So-called Filesystem Hierarchy Standard states that the /var directory is supposed to keep data like logs and caches of including but not limited to local packages:/var contains variable data files. This includes spool directories and files, administrative and logging
data, and transient and temporary files.
...
/var is specified here in order to make it possible to mount /usr
read-only. Everything that once went into /usr that is written to
during system operation (as opposed to installation and software
maintenance) must be in /varI'm primarily wondering how should it work in local multi-user system. /var is kind of global directory for a system, and all cache-data and logs of all users appear to be shared among them. Isn't it considered wrong in any way? I mean that programs (packages) being launched on behalf of different users will use the same cache, let alone the fact that all local user can read logs and look through cache-data of each other.
Please help me to understand that concept. Thanks.
| Why isn't the var directory user-specific? |
You are exactly right about /etc/systemd/system/*.service being an appropriate place for your custom service file.
As for your script, put it in /usr/local/... instead of /usr/.... That's because this script is written by you. Here are some reasons why this convention exists:Files in /usr/{bin,lib,share}/ are owned by the package manager. You should be able to dpkg -S <file>, pacman -Qs <file> or rpm -q --whatprovides <file> on any file out which package owns it.
You can remember what files are "yours" (i.e. if you rebuild this machine, it's easy to see what you need to copy over or what you can simply get from a package)
If you install a package, a generic-named script will not be overwritten by the package.Now, you need to choose the next directory. Options are:/usr/local/bin: is appropriate if your script is executable (e.g., compiled binary or uses a shebang like #!/bin/bash or #!/usr/bin/python3) and can be run by any user. I tend to use this a lot because it is quite simple.
/usr/local/sbin: This is appropriate if your script meets the needs of /usr/local/bin, but shouldn't be run by users. Instead it should be run by a system administrator only. /usr/local/sbin is often only included in the $PATH of a sudo environment.
/usr/local/lib: On some systems (e.g. Debian-based systems), this directory may include internal binaries that are not intended to be executed directly by users or shell scripts. However on other systems (i.e. Redhat-based systems), lib only includes object files and libraries.
/usr/local/libexec: This directory includes internal binaries that are not intended to be executed directly by users or shell scripts. This is only supported by some distributions.
/usr/local/share/<ServiceName>/. This only applies if your script is not executable and is architecture independent (i.e. not compiled). You have a *.sh script, but if there is no shebang and it is not executable, then you'll need to run it by calling the interpreter and passing the path to this script as an argument. /bin/bash /path/to/script.sh. In this case, the script could be considered a read-only architecture independent data file which should go in /usr/local/share. It's a good idea to put it down one further subdirectory to avoid cluttering /usr/local/share. Any documentation you choose to write for your service can also go in share. |
I have recently created my first custom systemd service to run a script early in my machine's boot sequence. The custom .service file is being copied to /etc/systemd/system, which I understand to be the correct location for custom services which are not deployed via packages or part of the operating system distribution.
It's a oneshot type service, which invokes a shell script to dynamically set the hostname, prior to the networking stack and dhcpcd being started. Here's the service definition:
[Unit]
Description=Set hostname on startup, based on hardware serial number
Wants=local-fs.target
After=local-fs.target
Before=systemd-hostnamed.service network.target[Service]
Type=oneshot
ExecStart=/SOMEPATH/hostname-from-serialnumber.sh[Install]
WantedBy=systemd-hostnamed.service network.targetI'm not sure what the best location for the shell script is, so I've put a placeholder SOMEPATH in the code block above. What is the correct location for this shell script, and why?
| Where should a shell script for a custom systemd service be installed? |
Try:
grep -w sed /etc/init.d/*
grep -w sed /etc/grub.d/*
grep -w sed /usr/bin/*The first yields 25 scripts and the second 47 on my system (debian).
The -w option restricts grep to looking for sed as a whole word. This way, matches to words like used or supposedly are avoided.
|
When I was studying bash, I found it very helpful to go and study the bash scripts already present on a clean install of Linux—/etc/profile, for instance, and anything in /etc/rc.d/init.d/. These are often quite advanced scripts, and by studying them I ensured I learned about many obscure features not covered in most bash tutorials.
I am studying sed now, and although the list of sed features is much shorter (so I know for a fact I have studied all of the features), I still feel it would be very beneficial to study through some sed scripts that are actually used in production, and are not just examples in tutorials.
To that end, I would really like to study any sed scripts that are already present on a well-known Linux distro, such as Ubuntu or CentOS. Trouble is, I have no idea where such scripts might be. I've already tried file /bin/* | grep script | sed 's/:.*//' | xargs grep sed with no results. file /sbin/* | grep script | sed 's/:.*//' | xargs grep -c sed returns some results, and I'm looking through those, but all the ones I have checked so far are just sed one-liners embedded in bash or sh scripts.
Where can I find some actual sed scripts on my Linux machine? Or failing that, is there a good place online to find some sed scripts that are actually used in production? (Reading through sedtris won't help much for my purposes. ;)
| Are there any sed scripts built in to mainstream Linux distros? |
It's really up to you but normally sources are stored in /usr/src as mentioned in the comments. Since system-wide installed applications could install their sources in this directory as well, to avoid possible conflicts, you could use /usr/local/src instead but again in the end no one stops you from storing sources anywhere you want as long as you remember where they are and there are no conflicts. You may as well create /src - easy to find, easy to cd to.
|
I have some sources I want to compile using make. The sources will be compiled into a driver I'm going to use. What is the correct place for such files? /usr/share? /opt? /usr/local/...?
Edit: the driver is going to be a kernel driver, and I'll be using dkms for the installation. The distro I'll be using is Ubuntu, but I'll might also use it for other distros in the future
| What is the correct place for driver sources? |
It is of course possible to change this permission but inadvisable.
The basic principle here is that root is NOT to be used as a regular user. You only login as root to perform security sensitive operations such as system upgrades. Therefore anything you must do as root should not in general be viewable by other users.
On that bases root's working area should remain strictly off limits to provide you with a safe space to work. This goes doubly for some automatically generated files which by default get written to a user's home. For example ~/.bash_history may inadvertently expose sensitive information. Better to black out the whole home directory than risk comprising your system.
If you are not forced to do something as root then don't. If root must share something then create a new directory (maybe in /usr/share) and create appropriate new groups to manage access.
|
I just noticed that the /root/ has 700 permission by default on Ubuntu, Debian as well as Nixos. Why is this handled differently than other directories for example /bin/?
What is so special about /root/ besides just being the home directory of the root user?
I wanted to give a user permission to view a directory within /root - but that requires executable permission set on the directory itself. (Do the parent directory's permissions matter when accessing a subdirectory?)
| Why does the /root/ directory have 700 permission by default? |
Forget the wrapper stuff:-)
All you need is a .file (dot file) with the user configuration options, in the $USER directoy. You can have one in /etc for system wide config options as well.
Make your script check for these .fils (dot file) and if they exist, use them.
HTH,
.
|
I created some scripts for administrative tasks etc, I made them to be independent from environment - every dependency is injected through arguments. However it is annoying to provide to script commonly used dependencies every time I run it, and I don't want to hardcode in it any local information, so I created wrappers. I put my general scripts in $HOME/bin but where should I put wrappers that contain local information and are only for speeding up invocation?
Example:
Think about script that makes and sends to given ftp server encrypted system backup. It was made as a generic script that can be used with any gpg public keys or ftp servers, however I'm using always only specific public key and uploading it only to specific ftp server, so I created a wrapper with this information. This generic script is actually in /root/bin as this is administrative tool, but where to put that wrapper?
| Where to put wrapper scripts? |
Here it is: The FHS 2.3 Specification
| Possible Duplicate:
Resources to learn linux architecture in detail? I migrated to UNIX (Linux, Ubuntu) and I'm trying to understand the organisation of files and directories. I stumbled upon File Hierarchy Standard (quite old it seems) and it made me wonder if this is the ACTUAL standard that is used.
Also may I ask if additional links to resources to broaden my knowledge (and everyone that asks questions about FHS) on these wonderful NIX* environments.
| Where can I find the Official File Hierarchy Standard for UNIX? [duplicate] |
Yes, if they are empty, it’s safe to remove them. If you actually install packages which ship them (e.g. libc6-x32) and remove them, the directories are removed; so removing them manually in your case is fine.
|
Is it safe to remove /usr/lib32, /usr/libx32 directories and their links on /lib32, libx32, on 64bit only (no multiarch enabled) Debian Linux 10? They are empty.
Since this new file system hierarchy is used, I'm considering removing them to reduce clutter in /.
| Is it safe to remove /usr/lib32 and /usr/libx32, on 64bit only Debian Linux 10 |
Please note that the installed glibc is an incomplete C runtime.
In order to complete the C runtime you may need to copy in
additional headers that match the compiler you are using since the use
of --sysroot will restrict their lookup to the sysroot.It is very possible to have multiple versions of glibc on the same system (we do that every day).
However, you need to know that glibc consists of many pieces (200+ shared libraries) which all must match. One of the pieces is ld-linux.so.2, and it must match libc.so.6, or you'll see the errors you are seeing.
The absolute path to ld-linux.so.2 is hard-coded into the executable at link time, and can not be easily changed after the link is done.
To build an executable that will work with the new glibc, do this:
g++ main.o -o myapp ... \
-Wl,--rpath=/path/to/newglibc \
-Wl,--dynamic-linker=/path/to/newglibc/ld-linux.so.2The -rpath linker option will make the runtime loader search for libraries in /path/to/newglibc (so you wouldn't have to set LD_LIBRARY_PATH before running it), and the -dynamic-linker option will "bake" path to correct ld-linux.so.2 into the application.
If you can't relink the myapp application (e.g. because it is a third-party binary), not all is lost, but it gets trickier. One solution is to set a proper chroot environment for it. Another possibility is to use rtldi and a binary editor.
SOLUTION #1
LD_PRELOAD='mylibc.so anotherlib.so' programSolution #2
compile your own glibc without dedicated GCC and use it
This setup might work and is quick as it does not recompile the whole GCC toolchain, just glibc.
But it is not reliable as it uses host C runtime objects such as crt1.o, crti.o, and crtn.o provided by glibc. This is mentioned at: https://sourceware.org/glibc/wiki/Testing/Builds?action=recall&rev=21#Compile_against_glibc_in_an_installed_location Those objects do early setup that glibc relies on, so I wouldn't be surprised if things crashed in wonderful and awesomely subtle ways.
For a more reliable setup, see Setup 2 below.
Build glibc and install locally:
export glibc_install="$(pwd)/glibc/build/install"git clone git://sourceware.org/git/glibc.git
cd glibc
git checkout glibc-2.28
mkdir build
cd build
../configure --prefix "$glibc_install"
make -j `nproc`
make install -j `nproc`Setup 1: verify the build
test_glibc.c
#define _GNU_SOURCE
#include <assert.h>
#include <gnu/libc-version.h>
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>atomic_int acnt;
int cnt;int f(void* thr_data) {
for(int n = 0; n < 1000; ++n) {
++cnt;
++acnt;
}
return 0;
}int main(int argc, char **argv) {
/* Basic library version check. */
printf("gnu_get_libc_version() = %s\n", gnu_get_libc_version()); /* Exercise thrd_create from -pthread,
* which is not present in glibc 2.27 in Ubuntu 18.04.
* https://stackoverflow.com/questions/56810/how-do-i-start-threads-in-plain-c/52453291#52453291 */
thrd_t thr[10];
for(int n = 0; n < 10; ++n)
thrd_create(&thr[n], f, NULL);
for(int n = 0; n < 10; ++n)
thrd_join(thr[n], NULL);
printf("The atomic counter is %u\n", acnt);
printf("The non-atomic counter is %u\n", cnt);
}Compile and run with test_glibc.sh:
#!/usr/bin/env bash
set -eux
gcc \
-L "${glibc_install}/lib" \
-I "${glibc_install}/include" \
-Wl,--rpath="${glibc_install}/lib" \
-Wl,--dynamic-linker="${glibc_install}/lib/ld-linux-x86-64.so.2" \
-std=c11 \
-o test_glibc.out \
-v \
test_glibc.c \
-pthread \
;
ldd ./test_glibc.out
./test_glibc.outThe program outputs the expected:
gnu_get_libc_version() = 2.28
The atomic counter is 10000
The non-atomic counter is 8674Command adapted from https://sourceware.org/glibc/wiki/Testing/Builds?action=recall&rev=21#Compile_against_glibc_in_an_installed_location but --sysroot made it fail with:
cannot find /home/ciro/glibc/build/install/lib/libc.so.6 inside /home/ciro/glibc/build/installso I removed it.
ldd output confirms that the ldd and libraries that we've just built are actually being used as expected:
+ ldd test_glibc.out
linux-vdso.so.1 (0x00007ffe4bfd3000)
libpthread.so.0 => /home/ciro/glibc/build/install/lib/libpthread.so.0 (0x00007fc12ed92000)
libc.so.6 => /home/ciro/glibc/build/install/lib/libc.so.6 (0x00007fc12e9dc000)
/home/ciro/glibc/build/install/lib/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007fc12f1b3000)The gcc compilation debug output shows that my host runtime objects were used, which is bad as mentioned previously, but I don't know how to work around it, e.g. it contains:
COLLECT_GCC_OPTIONS=/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/crt1.oSetup 1: modify glibc
Now let's modify glibc with:
diff --git a/nptl/thrd_create.c b/nptl/thrd_create.c
index 113ba0d93e..b00f088abb 100644
--- a/nptl/thrd_create.c
+++ b/nptl/thrd_create.c
@@ -16,11 +16,14 @@
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */+#include <stdio.h>
+
#include "thrd_priv.h" int
thrd_create (thrd_t *thr, thrd_start_t func, void *arg)
{
+ puts("hacked");
_Static_assert (sizeof (thr) == sizeof (pthread_t),
"sizeof (thr) != sizeof (pthread_t)");Then recompile and re-install glibc, and recompile and re-run our program:
cd glibc/build
make -j `nproc`
make -j `nproc` install
./test_glibc.shand we see hacked printed a few times as expected.
This further confirms that we actually used the glibc that we compiled and not the host one.
Tested on Ubuntu 18.04.
Sources:
https://stackoverflow.com/questions/847179/multiple-glibc-libraries-on-a-single-host/851229#851229
https://sourceware.org/glibc/wiki/Testing/Builds?action=recallrev=21#Compile_against_glibc_in_an_installed_location
|
I have a bit of a struggle with using avrdude to flash my microcontroller.
It dependends of libm.so.6 GLIBC_2.29 which it cannot find. It looks under /usr/lib/libm.so.6 where this file does actually resides BUT it also resides in /lib/lib.so.6.
So as I was running
sudo pacman -S glibc to install/update the library https://www.archlinux.org/packages/core/x86_64/glibc/
I am very sure I installed it only to /lib/.
But since avrdude is looking into /usr/lib it still won't find it. I have a hard time to understand the sense of these two directories since it kinda screws up things than helps for my case.
How can I do it properly?
EDIT
I wanted to do something stupid so I did cp /lib/libm.so.6 /usr/lib/libm.o.6 but the cp command tells me the files are the same.
Now I do not understand why avrdude can not find the right version of GLIBC since it is updated properly (as far as I can see that).
| GLIBC_2.29 can not be found for avrdude even after downloading it |
No. /etc/vfstab is used by Solaris but it is specific to all SVR4 systems.
This is the equivalent of the UNIX-es/Linux/*BSD /etc/fstab. In fact the old SunOS 4.x was using /etc/fstab as well.
Here is a small list of known equivalent for /etc/fstab used by other proprietary OS:IBM AIX (3.x and 4.x): /etc/filesystems
HP-UX (up to 9.x): /etc/checklist
Solaris (since 2.x): /etc/vfstab
Sco Unix: /etc/vfstabEdit: fixing answer due to the underlying question (OS identification).
|
Is file /etc/vfstab Solaris specific. I mean it only exists in SunOS/Solaris and there is no such file in other UNIX-es/Linux/*BSD.
| Is file /etc/vfstab Solaris specific? |
The current line of thought is that /usr should be integrated to /, and a few distributions simply symlink /bin,/sbin,/lib to the equivalents in /usr as you saw. E.g. Debian, which will only support merged /usr starting with the next release (bookworm), and Fedora appears to have done the merge in Fedora 17, already in 2012.
Basically, the arguments to that seem to boil down toit's the job if initramfs to bring the system to a usable state anyway, so no need for another "minimal" system to do that
the split doesn't really work anyway, since programs installed in /usr/bin may depend on libraries in /lib, so they aren't totally independent anyway
the whole idea of the split is a historical accident that just kept going after the original reasons had turned irrelevant.The / filesystem isn't that minimal anyway, e.g. on one dated pre-merge system, I have a bit less than 600 MB of files in /bin, /sbin and /lib, and a bit more than 600 MB in /usr. (No X or GUI stuff there, though.) On another system, /usr is a bit bigger, at around 2 GB, but even that is not remotely an issue with current storage sizes. At the same time, the initramfs files are about 15 MB, which is smaller enough that it might matter, and the initramfs doesn't have stuff like /etc that needs to be modifiable so it can be semi-statically installed.
Looking at the same system, the split of utilities between /bin and /usr/bin also seems a bit arbitrary, e.g. sh and bash are in /bin (obviously), and so is grep, but e.g. awk, head, wc and zsh are in /usr/bin. Not that modern systems rely that much on shell scripts to boot, and it's only the stuff that's needed to mount the remaining filesystem, that need to use the more limited set of tools, but anyway. And an administrator who likes zsh might not be happy if they were unable to log in when there's an issue with the filesystem...
The story about the origins of the split seems to be this:When the [original Unix system, in early 1970s] grew too big to fit on the first RK05 disk pack (their
root filesystem) they let it leak into the second one, which is where all the
user home directories lived (which is why the mount was called /usr). They
replicated all the OS directories under there (/bin, /sbin, /lib, /tmp...) and
wrote files to those new directories because their original disk was out of
space.(From Rob Landley, on Busybox mailing lists, Understanding the bin, sbin, usr/bin , usr/sbin split, edited slightly. The same message discusses the matter in a lot more detail.)
The systemd folks have also written some thoughts on that:"Booting Without /usr is Broken", and
"The Case for the /usr Merge" |
I read an article and it states that "bin" "dev" "etc" "lib" "root" "sbin" directories should be in the same filesystem as the root directory, that is, they should not be mounted as a separate filesystem.
I'm a little confused, for example, in many distributions now the "sbin" "bin" and "lib" directories are symbolic linked to the "usr" directory.
lrwxrwxrwx 1 root root 7 Apr 30 18:19 bin -> usr/bin
...
lrwxrwxrwx 1 root root 7 Apr 30 18:19 lib -> usr/lib
lrwxrwxrwx 1 root root 9 Apr 30 18:19 lib32 -> usr/lib32
lrwxrwxrwx 1 root root 9 Apr 30 18:19 lib64 -> usr/lib64
lrwxrwxrwx 1 root root 10 Apr 30 18:19 libx32 -> usr/libx32
...
lrwxrwxrwx 1 root root 8 Apr 30 18:19 sbin -> usr/sbinHowever, the article does not specify that the "/" directory and the usr directory must be on the same filesystem. I've also looked at the previous questions, but I'm still confused. Which directories must be in the same filesystem and which should we keep in separate filesystems?
| Which directories should be on the same filesystem as the root "/" filesystem? |
You may find /usr unaesthetic, but that is the way the universe works. Almost every Unix out there has /usr/bin/env —as far as I know, the only extant Unix that doesn't is SCO OpenServer, and it's not a big extant. By not having /usr/bin/env, you aren't just violating the FHS, you're violating an extremely widespread convention. /usr/bin/env is a standard location, even if that standard isn't written. It is not meant to be a configurable location: /usr/bin/env is the one location that everybody can assume to exist.
Whether you like it or not, the solution is to arrange to have /usr/bin/env. Removing the /usr hierarchy is fine, but if you do that, make /usr a symbolic link to /.
If you're going to mount a /usr over the network, then:Make /usr a directory that contains a symbolic link bin -> ../bin.
Make sure that on the filesystem that you mount on /usr, the file bin/env is either a symbolic link to /bin/env or a working env program. |
I'm building a Linux system that doesn't have a /usr directory. Getting the toolchain to work was surprisingly easy, but I'm hitting this irritation with a lot of auto* scripts: configure, etc. often seem to assume env is in /usr/bin.
A workaround is to do ln -sv .. /usr during the build, but obviously that's aesthetically unappealing and runs the risk of a path with /usr in it leaking into the final system. (There will be a network mounted /usr in production, and I don't want the base system to even know it exists.)
Did I install my autotools wrong, or is this just an irritating assumption configure often makes? Am I breaking FHS by not putting env in /usr/bin? (That's not a deal breaker for me; I'm already breaking it by having /inc and /share.)
| Hardcoded /usr/bin/env in configure scripts |
Does the kernel assume it's existence?No; the kernel assumes very little about the contents of the file systems it’s used with. The kernel will look for /etc/init in some circumstances (along with /sbin/init etc.), and a few staging drivers use helpers in etc, but the kernel will work fine without /etc.GLibc certainly has "/etc/hosts" hard-coded in the source.Yes, it does.How many of these paths are there in glibc? Is there a list somewhere? How hard would if be to change them?In my copy of the repository, grep -r '"/etc' finds 86 instances. I’m not aware of a maintained list anywhere. Changing the paths in the source code isn’t too hard.How much of the software on a bare bones embedded linux image is looking in there?That’s harder to determine. Programs which expect their own configuration in /etc will certainly look there; many others will indirectly, through libraries, if only the C library. The former would require changes to use another path; the latter would pick up any changes made to the libraries in question.How would we find out?grep... See also the corresponding source code search in Debian.Could we change it without recompiling?No. A related question is how much change would be involved before recompiling. Autoconf-produced configure scripts often support a sysconfdir option which defaults to /etc or /usr/local/etc, but that doesn’t handle all situations — in many cases it determines where a program installs and looks for its own configuration files, but it won’t change where a program looks for other files it expects in /etc (see the GNU C library).And then, how much more is there in, for example, Ubuntu, that would have to be changed?Less and less as you move further “up the stack”, I suspect. Anything user-configurable will have at most defaults in /etc, and many programs nowadays don’t have any configuration in /etc at all, or if they do it’s provided through some mechanism other than looking directly in /etc.But if the Linux ecosystem were set up to make such a thing feasible, what might that look like?It might look like one of the freedesktop.org specifications which determine how programs find files (XDG etc.).
|
Aside from the kernel itself, the Filesystem Hierarchy Standard is perhaps the only major feature common to all linux systems. Some obscure distributions modify it only slightly: stali, for example, uses a simplified version, while nixos adds too it (leaving /etc and /bin out of sight and out of direct control). Both, however, have a "/etc." You can swap out your init system, change your PATH, replace glibc with musl, pick a different window manager, desktop environment, display server, compositor, etc... But if you sit down at a running working linux machine, "/etc" will be there. You can be sure of that.
Which begs the question. What if you wanted to put it somewhere else...
In how many places, at how many levels, is the assumption of "/etc" made.
Does the kernel assume it's existence?
GLibc certainly has "/etc/hosts" hard-coded in the source. How many of these paths are there in glibc? Is there a list somewhere? How hard would if be to change them?
How much of the software on a bare bones embedded linux image is looking in there? How would we find out? Could we change it without recompiling? And then, how much more is there in, for example, Ubuntu, that would have to be changed?
Incidentally, I recall one of the first things I wanted to do when I first installed Ubuntu 10 years ago was to move /etc to /conf, I quickly discovered it didn't work like that. But if the Linux ecosystem were set up to make such a thing feasible, what might that look like?
This is a general curiosity, I'm asking in order to better understand the details of the linux ecosystem. Of course I'm not expecting a complete answer, but I figure someone might have some useful information or could point me in the right direction.
| What if I wanted /etc to be called something else? |
I think what it means is only that distros shouldn't assume that an installation has sole ownership of /usr, not that everything in /usr is expected to work with all FHS-compliant systems. I think I have heard of /usr being served over the network (via NFS for example) for a bunch of systems running the same distro. Since /usr is where the bulk of all installed files reside, this makes for a lot of space savings. Also, I think it's not unusual to have /usr a separate filesystem in any case, mounted read-only for additional security, so the "must not be written to" part helps with that as well.
/etc can't be shared in this manner - some files, like /etc/hostname are necessarily different for each host (though most files in /etc can be so shared, I think). Nor can /var - it wouldn't make sense to have two services on different systems logging to the same file, for example.
|
FHS-3.0 describes it as:Shareable, readonly data. That means that /usr should be shareable between various FHS-compliant hosts and must not be written to.I am a bit confused by what this means. Does this means that the binaries or whatever other files inside should be copy-pasteable onto another machine, and have them function perfectly fine?
| What is the /usr directory in Linux? |
Section 4.4.2 of the FHS, version 3.0, specifically statesThere must be no subdirectories in /usr/bin.Since you’re using Lintian, I suppose you’re targeting Debian or a derivative; in such an environment, the appropriate location for your binaries is a package-specific subdirectory of /usr/lib. Debian and its derivatives don’t use /usr/libexec.
|
Lintian tag description:The Filesystem Hierarchy Standard forbids the installation of new directories in /usr/bin other than /usr/bin/mh.However, all I can find the linked document is This is the primary directory of executable commands on the system.This allows executable commands to go there, but it does not forbid anything. What paragraph doees Lintian refer to?
The reason I like to put a subdirectory there is that I have a wrapper script, that the user uses instead of the binary, and I want the wrapper script to work without changes when "installing" the program. In short, the script looks like
options=()
debug=0
mode="rel"
for option in "$@"; do
if [ "$option" == "--debug" ]; then
debug=1
mode="dbg"
else
options+=("$option")
fi
donecurrent_dir=$(dirname "`readlink -f "${BASH_SOURCE[0]}"`")
binary="$current_dir"/__anja_"$mode"_"$arch"/anjaif [ $debug -eq 1 ]; then
gdb --args "$binary" "${options[@]}"
else
exec "$binary" "${options[@]}"
fiwhere arch is deduced from /proc/cpuinfo. The build system emits the binary in the directory __anja_"$mode"_"$arch", in the project root directory.
Yes, the correct place for the real binaries is /usr/libexec, but then the script must be changed during the installation procedure.
| Is subdirectory in /usr/bin really forbidden by FHS |
Yes, it makes sense to put them in /usr/lib, at least in some cases. According to the FHS, you should use an application-specific directory under /usr/lib, e.g. /usr/lib/yourapp. You can structure content there however you wish (see The Gimp, whose plugins end up in /usr/lib/gimp/2.0/plug-ins, and many other examples in your own /usr/lib).
The usual general rules for /usr apply: if the software is packaged using the system's package manager, then /usr/lib may be appropriate; otherwise it should go in /usr/local/lib (following the same pattern). (Thanks to mobileink and Barafu Albino for the reminder.)
|
Where should I deploy application plugins? Nobody will ever use them. Does it make sense to put them in /usr/lib?
| Where to place application plugins (.so)? |
On any systemd-based system, the location matching your requirements most closely is your subdirectory of /run/user, or rather, the directory indicated by $XDG_RUNTIME_DIR. This is flushed whenever the owning user session stops (typically, when the user logs out).
As far as the FHS goes, it doesn’t specify storage properties apart from durability; the appropriate location according to that is /tmp.
|
I would previously just use /tmp however this seems to persist after boot, it also seems to be have the disadvantage about literally writing to the disk as opposed to a ramdisk / tmpfs.
I thought perhaps /run/ but this seems (at least on my Nixos system) to be owned by the root user.
Is there any recommended directory for this use case?
| Is there a recommended path for storing temporary files in a tmpfs/ramdisk which also does not need to be persisted after boot? |
How could I have deduced that location without random googling, given that there is no unit called "tmpfiles"?% apropos tmp -l
systemd-gpt-auto-generator (8) - Generator for automatically discovering and mounting root, /home/, /srv/, /var/ and /var/tmp/ partitions, as well as discovering and enabling swap partitions, based on GPT partition type GUIDs.
systemd-tmpfiles (8) - Creates, deletes and cleans up volatile and temporary files and directories
systemd-tmpfiles-clean.service (8) - Creates, deletes and cleans up volatile and temporary files and directories
systemd-tmpfiles-clean.timer (8) - Creates, deletes and cleans up volatile and temporary files and directories
systemd-tmpfiles-setup-dev.service (8) - Creates, deletes and cleans up volatile and temporary files and directories
systemd-tmpfiles-setup.service (8) - Creates, deletes and cleans up volatile and temporary files and directories
systemd-update-utmp (8) - Write audit and utmp updates at bootup, runlevel changes and shutdown
systemd-update-utmp-runlevel.service (8) - Write audit and utmp updates at bootup, runlevel changes and shutdown
systemd-update-utmp.service (8) - Write audit and utmp updates at bootup, runlevel changes and shutdown
tmpfiles.d (5) - Configuration for creation, deletion and cleaning of volatile and temporary files
utmpdump (1) - dump UTMP and WTMP files in raw formatIgnoring the utmp entries, man 8 systemd-tmpfiles is the same as the other systemd-tmpfiles-* manpages, and it refers to man 5 tmpfiles.d, which has:
/etc/tmpfiles.d/*.conf
/run/tmpfiles.d/*.conf
/usr/lib/tmpfiles.d/*.conf~/.config/user-tmpfiles.d/*.conf
$XDG_RUNTIME_DIR/user-tmpfiles.d/*.conf
~/.local/share/user-tmpfiles.d/*.conf
…
/usr/share/user-tmpfiles.d/*.confThe first set being for system configuration, and the second set for user configuration.As to the logic, systemd configuration is generally in /usr/lib (or /lib, depending on the distro and /usr unification, but consistent within a distro), with corresponding override directories in /etc and /run, so there's nothing particularly surprising about that.
|
I was searching for the configuration for purging the /tmp and /var/tmp directories on a CentOS 7 default installation. After some searching, I came across the file /usr/lib/tmpfiles.d/tmp.conf which contains the actual retention periods.
I'm interested to learn the logic behind that file's placement. How could I have deduced that location without random googling, given that there is no unit called "tmpfiles"?
| Systemd: Logic behind configuration files in /usr/lib/ |
The FHS is defining directory names and usage. Creating a custom directory directly under the root one is considered risky as it might conflict with a future version of the standard or with a new OS owned directory.
Unlike many other Unix and Unix like OSes file system standards (e.g. freeBSD and Solaris), the FHS fails for some reason to define /net as a generic mount point for automounted NFS shares. On the other hand, the FHS defines /mnt and /media for a similar but distinct purposes.
While /media is for locally attached devices like CD, DVDs and thumb drives, /mnt doesn't restrict the kind of device so should theoretically be usable to store your sshfs mount, for example in /mnt/sshfs/xxx, but creating an exclusive subdirectory under /mnt might conflict with existing admin usage so I wouldn't recommend doing it. /mnt is defined to hold file systems temporarily mounted here by the administrator, which doesn't exactly match file systems automatically mounted by a daemon.
There is no way to use /net to store sshfs mounts as autofs configuration is forbidding to have multiple handlers for the same mount point.
As auto.smb is suggesting /cifs for its root mount point directory, I would simply use /sshfs. The risk for /sshfs to clash in the future with an OS owned directory is essentially zero.
Excerpt from the auto.smb manual page:
# Put a line like the following in /etc/auto.master:
# /cifs /etc/auto.smb --timeout=300
Excerpt for the auto.master default configuration file:
# NOTE: mounts done from a hosts map will be mounted with the
# "nosuid" and "nodev" options unless the "suid" and "dev"
# options are explicitly given.
#
# /net -hosts
|
I have some sshfs mounts which I want to put in a Linux filesystem location following the Filesystem Hierarchy Standard.
The standard is strangely silent on where network mounts should be placed:media Mount point for removeable media
mnt Mount point for mounting a filesystem temporarilyMounting under /net could conflict with NFS autofs mounts from the same hostname.
Where is a sensible place to put sshfs mounts given that creating directories directly under / is frowned upon?
| Where should sshfs mounts be placed in the filesystem? |
Temporary files whose lifetime doesn't exceed that of the program that creates them, and in particular aren't supposed to survive a reboot, go into /tmp. Or rather, the convention is to use the directory indicated by the environment variable TMPDIR, and fall back to /tmp if it isn't set.
You can execute files in /tmp. While a system administrator could mount it without executable permissions, this would be a hardening configuration for a system that only runs specific applications: it is to be expected that preventing execution under /tmp would break some applications, and it typically wouldn't improve security anyway.
Keep in mind that this directory is often shared between users, so you need to be careful when creating files there not to accidentally start using an existing file owned by another user. Use the mktemp utility or the mkstemp function, or better, create a private temporary directory with mktemp or mkdtemp and work in that directory.
/run or /var/run are not appropriate because you may not have the permission to create files there (in fact, you will not have the permission to create files there unless granted by the system administrator). They're for system use, not for applications.
|
I am writing a piece of software (C++) which generates Python scripts.
Where should these temporarily existing scripts be placed in the file system?
I read a couple pages about the Filesystem Hierarchy Standard, but I didn't find any reference to generated scripts.
/usr/bin does not seem to be a good idea as it might be read-only for certain users. My idea would be to place them under /var/run/my_program.
Is that right/ok? Or what is the "right place"?
Edit:
The scripts are only used while the creating programs runs. That means they do not have to live past a reboot.
| Where should generated scripts be placed in the filesystem? |
The FHS also says "/boot stores data that is used before the kernel begins executing user-mode programs".
In case of GNU GRUB, the GRUB modules (normal.mod for example) are stored in a sub-directory of /boot, specifically at /boot/grub/<GRUB architecture name>/.Programs necessary to arrange for the boot loader to be able to boot a file must be placed in /sbin.In other words, /sbin is the place for programs that are not needed at boot time, but are needed to install or (re)configure a bootloader, like grub-install and grub-mkconfig.
Note that since the release of FHS 3.0 in 2015, several distributions have decided to start the process of merging /bin, /lib and /sbin to /usr/bin, /usr/lib and /usr/sbin respectively. The expected result is to eventually have /bin, /lib and /sbin as backwards-compatibility symlinks to the corresponding directories under /usr, so in modern systems, the bootloader installation/configuration tools may in fact be found in /usr/sbin already.
|
In Filesystem Hierarchy Standard it is said as to the /boot folder that it must contain static files of the boot loader. And there is another rule which states thatPrograms necessary to arrange for the boot loader to be able to boot a
file must be placed in /sbin.Can someone explain what this line is about, probably by providing a few examples of programs concerned?
| Misunderstanding of a rule about /boot folder (FHS) |
You will propably have to run something like
rsync -avun --delete in both directions.
But what are you actually trying to accomplish?
Update:
rsync -avun --delete $TARGET $SOURCE |grep "^deleting "
will give you a list of files that do not exist in the target-directory.
"grep delet" because each line prints : deleting ..file..
rsync -avun $SOURCE $TARGET will give you a list of "different" files (including new files).
|
Is it possible to compare two directories with rsync and only print the differences? There's a dry-run option, but when I increase verbosity to a certain level, every file compared is shown.
ls -alR and diff is no option here, since there are hardlinks in the source making every line different. (Of course, I could delete this column with perl.)
| rsync compare directories? |
rsync compares only metadata by default.
rsync -n -a -i --delete source/ target/explanation:-n compare but do not actually copy or delete <-- THIS IS IMPORTANT!!1
-a compare all metadata
-i print one line of information per file
--delete also report files which are in target but not in sourcenote: it is important to append the directory names with a slash. this is an rsync thing.
also note: rsync is a powerful tool. some explanation above is crudely simplified for the context of this question. especially -a is much more complex than just "all metadata".
you can shorten the one letter options like this
rsync -nai --delete source/ target/you can provide -i twice to also have information printed for files that are identical
rsync -naii --delete source/ target/example output:
.d..t...... ./ (directory with different timestamp)
>f.st...... modifiedfile (file with different size and timestamp)
>f+++++++++ newfile (file in source but not in target)
*deleting removedfile (file in target but not in source)
.f samefile (file that has same metadata. only with -ii)remember that rsync only compares metadata. that means if the file content changed but metadata is still the same then rsync will report that file is same. this is an unlikely scenario. typically if data changes then metadata will also change. so either trust that when metadata is same then data is same, or you have to compare file data bit by bit.
bonus: for progress information see here: Estimate time or work left to finish for rsync?
|
With
diff -rI can do this task, however it takes so long because diff checks file's content.
I want something that determine that two files are the same regarding of their size, last modified, etc. But no checking bit by bit the file (for example a video takes sooo long).
Is there any other way?
| Compare directories but not content of files |
czkawka is an open source tool which was created to find duplicate files (and images, videos or music) and present them through command-line or graphical interfaces, with an emphasis on speed. This part from the documentation may interest you:Faster scanning for big number of duplicates
By default for all files grouped by same size are computed partial hash(hash from only of 2KB each file). Such hash is computed usually very fast, especially on SSD and fast multicore processors. But when scanning a hundred of thousands or millions of files with HDD or slow processor, typically this step can take much time.With the GUI version, hashes will be stored in a cache so that searching for duplicates later will be way faster.Examples:
Create some test files:
We generate random images, then copy a.jpg to b.jpg in order to have a duplicate.
$ convert -size 1000x1000 plasma:fractal a.jpg
$ cp -v a.jpg b.jpg
'a.jpg' -> 'b.jpg'
$ convert -size 1000x1000 plasma:fractal c.jpg
$ convert -size 1000x1000 plasma:fractal d.jpg
$ ls --size
total 1456
364 a.jpg 364 b.jpg 364 c.jpg 364 d.jpgCheck only the size:
$ linux_czkawka_cli dup --directories /run/shm/test/ --search-method size
Found 2 files in 1 groups with same size(may have different content) which took 361.76 KiB:
Size - 361.76 KiB (370442) - 2 files
/run/shm/test/b.jpg
/run/shm/test/a.jpgCheck files by their hashes:
$ linux_czkawka_cli dup --directories /run/shm/test/ --search-method hash
Found 2 duplicated files in 1 groups with same content which took 361.76 KiB:
Size - 361.76 KiB (370442) - 2 files
/run/shm/test/b.jpg
/run/shm/test/a.jpgCheck files by analyzing them as images:
$ linux_czkawka_cli image --directories /run/shm/test/
Found 1 images which have similar friends
/run/shm/test/a.jpg - 1000x1000 - 361.76 KiB - Very High
/run/shm/test/b.jpg - 1000x1000 - 361.76 KiB - Very High |
Tools like fdupes are ridiculous overkill when dealing with jpg or h264 compressed files. Two such files having the exact same filesize is already a pretty good indication of them being identical.
If, say, in addition to that, 16 equidistant chunks of 16 bytes are extracted and compared and they are the same as well that would be plenty of evidence for me to assume that they are identical. Is there something like that?
(By the way I am aware that filesize alone can be a rather unreliable indicator since there are options to compress to certain target sizes, like 1MB or 1 CD/DVD. If the same target size is used on many files, it is quite reasonable that some different files will have the exact same size.)
| Is there a tool or script that can very quickly find duplicates by only comparing filesize and a small fraction of the file contents? |
You can use awk:
$ awk 'FNR==NR{a[$1];next}($1 in a){print}' file2 file1
A0001 C001
A0024 C234
B1542 C231 | I have two files.
File 1:
A0001 C001
B0003 C896
A0024 C234
.
B1542 C231
.
upto 28412 such linesFile 2:
A0001
A0024
B1542
.
.
and 12000 such lines.I want to compare File 2 against File 1 and store the matching lines from File 1. I tried Perl and Bash but none seems to be working.
The latest thing I tried was something like this:
for (@q) # after storing contents of second file in an array
{
$line =`cat File1 | grep $_`; #directly calling File 1 from bash
print $line;
}but it fails.
| Compare two files for matching lines and store positive results [duplicate] |
If those file contents are called file1, file2 and file3 in order of appearance, then you can do it with the following one-liner:
# python3 -c "x=open('file1', mode='rb').read(); y=open('file2', mode='rb').read(); print(x in y or y in x)"
True
# python3 -c "x=open('file2', mode='rb').read(); y=open('file1', mode='rb').read(); print(x in y or y in x)"
True
# python3 -c "x=open('file1', mode='rb').read(); y=open('file3', mode='rb').read(); print(x in y or y in x)"
False |
I am trying to find a way to determine if a text file is a subset of another..
For example:
foo
baris a subset of
foo
bar
plutoWhile:
foo
plutoand
foo
barare not a subset of each other...
Is there a way to do this with a command?
This check must be a cross check, and it has to return:
file1 subset of file2 : True
file2 subset of file1 : True
otherwise : False | How to know if a text file is a subset of another |
This could be an approach:
diff <(nl file1) <(nl file2)With nl number the lines that diff recognizes the lines line by line.
|
I have two files that essentially contain a memory dumps in a hex format. At the moment I use diff to see if the files are different and where the differences are. However, this can be misleading when trying to determine the exact location (i.e. memory address) of the difference. Consider the following example showing the two files side-by-side.
file1: file2:0001 | 0001
ABCD | FFFF
1234 | ABCD
FFFF | 1234Now diff -u will show one insertion and one deletion, although 3 lines (memory locations) have changed between the two files:
0001
+FFFF
ABCD
1234
-FFFFIs there an easy way to compare the two files such that each line is only compared with the same line (in terms of line numbering) in the other file? So in this example it should report that the last 3 lines have changed, along with the changed lines from file1 and file2. The output doen't have to be diff-style, but it would be cool if it could be colored (at the moment I color the diff -u output using sed so that could easily be adapted).
| Compare two files strictly line-by-line, without insertions or deletions |
@deroberts answer is great, though I want to share some other information that I have found.
gzip -l -v
gzip-compressed files contain already a hash (not secure though, see this SO post):
$ echo something > foo
$ gzip foo
$ gzip -v -l foo.gz
method crc date time compressed uncompressed ratio uncompressed_name
defla 18b1f736 Feb 8 22:34 34 10 -20.0% fooOne can combine the CRC and uncompressed size to get a quick fingerprint:
gzip -v -l foo.gz | awk '{print $2, $7}'cmp
For checking whether two bytes are equal or not, use cmp file1 file2. Now, a gzipped file has some header with the data and footer (CRC plus original size) appended. The description of the gzip format shows that the header contains the time when the file was compressed and that the file name is a nul-terminated string that is appended after the 10-byte header.
So, assuming that the file name is constant and the same command (gzip "$name") is used, one can check whether two files are different by using cmp and skipping the first bytes including the time:
cmp -i 8 file1 file2Note: the assumption that the same compression options is important, otherwise the command will always report the file as different. This happens because the compression options are stored in the header and may affect the compressed data. cmp just looks at raw bytes and do not interpret it as gzip.
If you have filenames of the same length, then you could try to calculate the bytes to be skipped after reading the filename. When the filenames are of different size, you could run cmp after skipping bytes, like cmp <(cut -b9- file1) <(cut -b10- file2).
zcmp
This is definitely the best way to go, it first compresses data and starts comparing the bytes with cmp (really, this is what is done in the zcmp (zdiff) shellscript).
One note, do not be afraid of the following note in the manual page:When both files must be uncompressed before comparison, the second is uncompressed to /tmp. In all other cases, zdiff and zcmp use only a pipe.When you have a sufficiently new Bash, compression will not use a temporary file, just a pipe. Or, as the zdiff source says:
# Reject Solaris 8's buggy /bin/bash 2.03. |
I am trying to save space while doing a "dumb" backup by simply dumping data into a text file. My backup script is executed daily and looks like this:Create a directory named after the backup date.
Dump some data into a text file "$name".
If the file is valid, gzip it: gzip "$name". Otherwise, rm "$name".Now I want to add an additional step to remove a file if the same data was also available in the day before (and create symlink or hardlink).
At first I thought of using md5sum "$name", but this does not work because I also store the filename and creation date.
Does gzip have an option to compare two gzipped files and tell me whether they are equal or not? If gzip does not have such an option, is there another way to achieve my goal?
| How can I check if two gzipped files are equal? |
Supposing you have the size of file1 in the variable FILE1_SZ and your head implementation supports the (non-standard) -c option:
if head -c "$FILE1_SZ" file2 | cmp -s - file1; then
echo "file1 is a prefix of file2"
else
echo "file1 is not a prefix of file2"
fi |
I have two files with sizes 124665 and 124858 in bytes and want to check whether file1 is a prefix of file2 or not.
| How to check whether file1 is a prefix of file2? |
awk is a better tool for comparing columns of files. See, for example, the answer to: compare two columns of different files and print if it matches -- there are similar answers out there for printing lines for matching columns.
Since you want to print lines that don't match, we can create an awk command that prints the lines in file2 for which column 2 has not been seen in file1:
$ awk 'NR==FNR{c[$2]++;next};c[$2] == 0' file1 file2
Another 193 stuff2
Another 783 stuff3As explained similarly by terdon in the above-mentioned question,NR==FNR : NR is the current input line number and FNR the current file's line number. The two will be equal only while the 1st file is being read.
c[$2]++; next : if this is the 1st file, save the 2nd field in the c array. Then, skip to the next line so that this is only applied on the 1st file.
c[$2] == 0 : the else block will only be executed if this is the second file so we check whether field 2 of this file has already been seen (c[$2]==0) and if it has been, we print the line. In awk, the default action is to print the line so if c[$2]==0 is true, the line will be printed.But you also want the lines from file1 for which column 2 doesn't match in file2. This you can get by simply exchanging their position in the same command:
$ awk 'NR==FNR{c[$2]++;next};c[$2] == 0' file2 file1
Something 456 item2
Something 768 item3So now you can generate the output you want, by using awk twice. Perhaps someone with more awk expertise can get it done in one pass.
You tagged your question with /ksh, so I'll assume you are using korn shell. In ksh you can define a function for your diff, say diffcol2, to make your job easier:
diffcol2()
{
awk 'NR==FNR{c[$2]++;next};c[$2] == 0' $2 $1
awk 'NR==FNR{c[$2]++;next};c[$2] == 0' $1 $2
}This has the behavior you desire:
$ diffcol2 file1 file2
Something 456 item2
Something 768 item3
Another 193 stuff2
Another 783 stuff3 |
Will it be possible to use diff on a specific columns in a file?
file1
Something 123 item1
Something 456 item2
Something 768 item3
Something 353 item4file2
Another 123 stuff1
Another 193 stuff2
Another 783 stuff3
Another 353 stuff4output(Expected)
Something 456 item2
Something 768 item3
Another 193 stuff2
Another 783 stuff3I want to diff the 2nd column of each file, then, the result will contain the diff-ed column but along with the whole line.
| Using Diff on a specific column in a file |
This can easily be done with diff. For example:
$ ls -l foo/
total 2132
-rwxr-xr-x 1 terdon terdon 1029624 Nov 18 13:13 bash
-rwxr-xr-x 1 terdon terdon 1029624 Nov 18 13:13 bash2
-rwxr-xr-x 1 terdon terdon 118280 Nov 18 13:13 ls$ ls -l bar/
total 1124
-rwxr-xr-x 1 terdon terdon 1029624 Nov 18 13:14 bash
-rwxr-xr-x 1 terdon terdon 118280 Nov 18 13:14 ls$ diff bar/ foo/
Only in foo/: bash2In the example above, the foo/ and bar/ directories contain binary files and bash2 is only in foo/.
So, you could run something simple like:
$ diff bar/ foo/ && echo "The directories' contents are identical"That will show you the different files, if any, or print "The directories' contents are identical" if they are. To compare subdirectories and any files they may contain as well, use diff -r. Combine it with -q to suppress the output for text files.
|
I'd like to compare directories with binary files. Actually, I'm not interested in what the actual differences between files are, but to know if there's a differ (and what files differ). Previously I used meld, but it's cannot compare binary files.
What such file comparison tool can do this?
NOTE: It doesn't matter if it's a graphical tool or is just has a command-line.
| How to compare directories with binary files |
#!/bin/bash
shopt -s dotglobfor file in "$1"/*; do [[ -f "$file" ]] && d1+=( "$(md5sum < "$file")" ); done
for file in "$2"/*; do [[ -f "$file" ]] && d2+=( "$(md5sum < "$file")" ); done [[ "$(sort <<< "${d1[*]}")" == "$(sort <<< "${d2[*]}")" ]] && echo "Same" || echo "Different"You can see it in action here:
$ mkdir 1 2
$ ./comparedirs 1 2
Same
$ cat > 1/1 <<< foo
$ cat > 2/1 <<< foo
$ ./comparedirs 1 2
Same
$ cat > 2/1 <<< bar
$ ./comparedirs 1 2
Different |
In Ubuntu, is there any to find duplicate folders in a directory (i. e., folders with the same content)? I think there are already some command-line tools available for finding duplicate files (such as fdupes), but I want to find duplicate folders instead. That is, find folders which match in terms of the contents of the files they contain (though the filenames and other metadata might differ).
| Find all folders in a directory with the same content |
One possible solution may be:
diff -s $FIRST_FILE $SECOND_FILE > /dev/null
if [ $? -eq 0 ]; then
echo "The files are identical"
fiNOTE: It changed the question text.
|
When I set the -s parameter, diff also print files, that are different.
diff -s $FIRST_FILE $SECOND_FILE | Silent result with two identical files in diff: how to show them? |
Compare the sorted files.
In bash (or ksh or zsh), with a process substitution:
diff <(sort File1.txt) <(sort File2.txt)In plain sh:
sort File1.txt >File1.txt.sorted
sort File1.txt >File2.txt.sorted
diff File1.txt.sorted File2.txt.sortedTo quickly see the differences between sorted files, comm can be useful: it shows directly the lines that are in one file but not the other.
comm -12 <(sort File1.txt) <(sort File2.txt) >common-lines.txt
comm -23 <(sort File1.txt) <(sort File2.txt) >only-in-file-1.txt
comm -13 <(sort File1.txt) <(sort File2.txt) >only-in-file-2.txtIf a line is repeated in the same file, the commands above insist on the two files having the same number of repetitions. If you want to treat
foo
bar
fooas identical to
bar
foothen remove duplicates when sorting: use sort -u instead of sort.
If you save the output of sort on one file and use it later when the other file is available, note that the two files must be sorted in the same locale. If you do this, you should probably sort in byte order:
LC_ALL=C sort File1.txt >File1.txt.sorted |
Suppose that I have the two files with the following content:
$ cat File1.txt
Apple
orange
watermelon
avocado
lime$ cat File2.txt
orange
Apple
lime
watermelon
avocadoBasically, there is no difference, as both have same values.
I am using the diff command:
diff File1.txt File2.txtand it shows files are different as values are misplaced, In my case I require it should have not showed difference. What are the other ways to achieve this, any suggestions are welcome.
| I want to compare values of two files, but not based on position or sequence |
With bash, zsh and some implementations of ksh:
comm -12 <(tr -s '[:space:]' '[\n*]' < a.txt | sort -u) \
<(tr -s '[:space:]' '[\n*]' < b.txt | sort -u)There, word is a sequence of non-spacing character (beware that with GNU tr, that doesn't work with multi-byte spacing characters).
comm finds the common lines between two sorted files. Without options, it prints 3 columns: the lines only in file1, the lines only in file2, and the lines common to both. You add -1, -2, -3 to remove the corresponding columns from the output. So comm -12 only leaves the third column (the common lines).
tr -s '[:space:]' '[\n*]' transliterate any sequence of characters of class space into newlines, to put every word on its own line.
sort -u sorts and removes duplicates from tr's output.
Process substitution <(...) pipes the outputs of the tr|sort commands to comm.With zsh:
w1=($(<a.txt)) w2=($(<b.txt))
print -rl -- ${(u)${w1:*w2}}There, word is a sequence of characters other than space, tab, nul and newline (with the default value of $IFS).
$(<a.txt) is an optimised version of $(cat a.txt) where zsh reads the content of the file by itself without invoking cat, since it's not quoted, it undergoes word splitting (but not globbing contrary to other shells).
So w1 and w2 are arrays containing all the words in a.txt and b.txt.
${w1:*w2} is a zsh operator that gives the intersection of two arrays (the elements common to both). (u) is a parameter expansion flag that retains unique elements (removes duplicates).
print -rl prints each argument one per line.
|
Suppose I have two files a.txt and b.txt.
I want to find all the words in a.txt which appear in b.txt.
Is there a specific command to do that?
| finding all the words in a text file appearing in another text file |
Determine the size of the image, for example with \ls -l my.img (not ls -lh, that would give you an approximate size; \ls protects against an alias like ls='ls -h') or with stat -c %s my.img.
If you want to check the copy against the original just this once, then just compare the files. Using hashes is useless for a one-time comparison, it would only make things slower and require more commands. The command cmp compares binary files. You need to pass it the image file and the corresponding part of the SD card. Use head to extract the beginning of the SD card.
</dev/sdc head -c "$(stat -c %s my.img)" | cmp - my.imgIf you want to perform many comparisons, then hashes are useful, because you only need to read each instance once, to calculate its hash. Any hash will do since you're worried about data corruption. If you needed to check that a file hasn't been modified for security reasons, then cksum and md5sum would not be suitable, you should use sha256sum or sha512sum instead.
md5sum <my.img >my.img.md5sum
</dev/sdc head -c "$(stat -c %s my.img)" | md5sum >sd-copy.md5sum
cmp my.img.md5sum sd-copy.md5sumNote the input redirection in the first command; this ensures that the checksum file doesn't contain file names, so you can compare the checksum files. If you have a checksum file and a copy to verify, you can do the check directly with
</dev/sdc head -c "$(stat -c %s my.img)" | md5sum -c my.img.md5sumOh, and don't use dd, it's slow (or at best not faster) and doesn't detect copy errors.
|
I have a ~1GB image that I'm writing to a 8GB SD card via the dd tool. I'd like to verify that it was written without corruption by reading it back and comparing its hash with original one.
Obviously, when I read it back via dd the size of resulting image matches size of my SD card, therefore checking hashes are useless.
I believe that I should somehow interpret the output of writing invocation to configure the skip / count parameters to read it back properly.
Command that I used to write my image:
> sudo dd if=my.img of=/dev/sdc bs=1M
8+50581 records in
8+50581 records out
3947888640 bytes (3.9 GB) copied, 108.701 s, 36.3 MB/sCommand that I used to read my image:
> sudo dd if=/dev/sdc of=same_as_my.img
15523840+0 records in
15523840+0 records out
7948206080 bytes (7.9 GB) copied, 285.175 s, 27.9 MB/s | How one can re-read image with dd so it will match one you just wrote? |
The simple answer is: "compare the sorted version of both files".
In bash:
diff <(sort file1) <(sort file2)Obviously, this does not mean the two files have the same semantic as source files of a programming language (supposing are both syntactically correct).
|
Consider for example a source code file, where the functions are drastically shuffled around. Is there is a command to check if the reordering of lines is the only change?
(that means no lines are added, removed or changed)
| How to determine if file is just a permutation of another one? |
start cmd:> awk 'FNR == NR { oldfile[$0]=1; };
FNR != NR { if(oldfile[$0]==0) print; }' file1 file2
delta
omega
rho
phi | I need to compare two txt files. Every line of both txt files contain entries. One entry per line. The new file contains entries the old one lacks. I have tried to use diff and vimdiff but these don't work because the lines may be in different order.
For example:
OLD FILE
alpha
beta
gamaNEW FILE
delta
omega
beta
alpha
gama
rho
phidiff and vimdiff compares line 1 with line 1, line 2 with line 2, etc and even if I sort both files the comparison will not succeed because I can have new items between the sorted versions, like "alpha, beta, rho" versus "alpha, beta, gama, rho".
How do I get a list of entries that the new file have that the old one does not?
| Finding new lines in one file compared with another [duplicate] |
This is one of those rare occasions when I'd probably use getline due to the size of your input files so we only save a handful of lines in memory at a time instead of >10G:
$ cat tst.awk
BEGIN {
OFS = "\t"
print "Group_Source:Location", "df1.index", "df2.index"
}
NR != FNR { exit }
{ srcLoc = $3 ":" $4 }
srcLoc != prevSrcLoc {
if ( NR > 1 ) {
diff()
}
prevSrcLoc = srcLoc
}
{
file1[$1,$2] = FNR - 1
if ( (getline < ARGV[2]) > 0 ) {
file2[$1,$2] = FNR - 1
}
}
END { diff() }function diff( idPos) {
for ( idPos in file1 ) {
if ( file1[idPos] != file2[idPos] ) {
print prevSrcLoc, file1[idPos], file2[idPos]
}
}
delete file1
delete file2
}$ awk -f tst.awk file1.tsv file2.tsv
Group_Source:Location df1.index df2.index
ch1:16 6 4
ch1:16 4 6
ch1:18 10 9
ch1:18 9 10
ch2:53 17 14
ch2:53 15 17
ch2:53 14 15For more info on getline, please read http://awk.freeshell.org/AllAboutGetline.
The above would work even if an Identifier and/or Position was repeated within the input since it's comparing all 4 fields between the 2 files. It does assume that the Source and Location values are in the same order between the 2 files as shown in the sample input.
|
I have two large tab-delimited files (>10GB) and I know that when they're sorted, they're identical in content.
However, I'm interested in the order of rows and the index of the swapped ones when they share the same "key" (key here being defined as rows grouped based on Source and Location columns).
In other words, rows between these two files should be only compared against each other when they come from the same group (i.e. when they share the same Source and Location).
So for example, in the example below, rows 4, 5, 6 from file1.tsv should be compared against 4, 5, 6 from file2.tsv
Note: files are normal TSV. Additional spaces are only added here to make columns center- and right-aligned for better visibility. These spaces are not part of the original filesfile1.tsv Identifier Position Source Location
AY1:2301 87 ch1 14
BC1U:4010 105 ch1 14
AC44:1230 90 ch1 15
AJC:93410 83 ch1 16
ABYY:0001 101 ch1 16
ABC:01 42 ch1 16
HH:A9CX 413 ch1 17
LK:9310 2 ch1 17
JFNE:3410 132 ch1 18
MKASDL:11 14 ch1 18
MKDFA:9401 18 ch1 18
MKASDL1:011 184 ch2 50
LKOC:AMC02 18 ch2 50
POI:1100 900 ch2 53
MCJE:09HA 11 ch2 53
ABYCI:1123 15 ch2 53
MNKA:410 1 ch2 53file2.tsv Identifier Position Source Location
AY1:2301 87 ch1 14
BC1U:4010 105 ch1 14
AC44:1230 90 ch1 15
ABC:01 42 ch1 16
ABYY:0001 101 ch1 16
AJC:93410 83 ch1 16
HH:A9CX 413 ch1 17
LK:9310 2 ch1 17
MKASDL:11 14 ch1 18
JFNE:3410 132 ch1 18
MKDFA:9401 18 ch1 18
MKASDL1:011 184 ch2 50
LKOC:AMC02 18 ch2 50
MNKA:410 1 ch2 53
POI:1100 900 ch2 53
ABYCI:1123 15 ch2 53
MCJE:09HA 11 ch2 53I want to do something similar to a "diff" but at the 'group' level (where rows are only compared when they share the same Source and Location)
I want to extract the original "row numbers" when the order of rows are 'swapped' within the same "Source/Location" "group" (or key).
The whole row should match in terms of content.
But I have no idea how to go about this. I can only think of writing a for loop which would be extremely inefficient when my original dataset has millions of rows.
Expected result:
Group_Source:Location df1.index df2.indexch1:16 4 6
ch1:16 6 4
ch1:18 9 10
ch1:18 10 9
ch2:53 14 15
ch2:53 15 17
ch2:53 17 14Assumptions:Both dataframes have the same number of rows
Both dataframes are identical (only order of rows are swapped, so if both are sorted by Source, then Location and then Position and then Identifier, then they will be exactly identical)
'Swapped' rows always match exactly in terms of content in all columns | Extract the indexes of rows that are swapped in order between two files |
You can use grep for this:
$ grep -vwf <(cut -d, -f1 file1) file2
test4Explanationgrep options:
-v, --invert-match
Invert the sense of matching, to select non-matching lines.
-w, --word-regexp
Select only those lines containing matches that form
whole words.
-f FILE, --file=FILE
Obtain patterns from FILE, one per line. So, combined, grep -vwf patternFile inputFile means "find those lines from patternFile which are never present as whole words in inputFile".
<(command): this is called process substitution and, in the shells that support it (e.g. bash) it will essentially act like a file. This enables us to use the output of the cut command as a "file" for grep's -f option.
cut -d, -f1 file1: print only the 1st, comma-separated field of file1.Note that you might want to use -x (match entire line) instead of just -w if your data are really as you show:
-x, --line-regexp
Select only those matches that exactly match the whole line.So:
$ grep -vxf <(cut -d, -f1 file1) file2
test4Also, if your file1 can contain any regular expression characters (., *, ? etc.) you might want to use -F as well:
-F, --fixed-strings
Interpret PATTERNS as fixed strings, not regular expressions.So:
$ grep -Fvxf <(cut -d, -f1 file1) file2
test4 |
File# 1:
test1,1
test2,2
test3File# 2:
test2
test1
test4Desired Output:
test4 | Compare 2 files based on the first column and print the not matched |
You could pipe to:
expand -t "$((${COLUMNS:-$(tput cols)} / 2))"Or for the angle brackets:
awk -v cols="${COLUMNS:-$(tput cols)}" '
BEGIN {width = cols/2-1; space = sprintf("%*s", width, "")}
/^\t/ {print space ">", substr($0, 2); next}
{printf "%-*s<\n", width, $0}'If your tput doesn't output the number of columns, you could try parsing the output of stty size or stty -a. Or use zsh -c 'echo $COLUMNS' (also works with mksh). There's no standard/portable way to get that information.
If the input files contain multi-byte or double-width characters, YMMV. Depending on the expand/awk implementation alignment may be off.
That also assumes that the input files have no line that start with a Tab character. If that can't be guaranteed, the GNU implementation of comm has a --output-delimiter which you could use to specify a unique string. Or you could implement the comm functionality in awk which shouldn't be too complicated.
|
I am looking from something which gives me an output of comm -3 on two sorted outputs (line-by-line comparison, only additional/missing lines from either side) but which looks more like the output from diff -y, e.g. in that it uses the whole width.
file1:
bar/a
bar/feugiat
bar/libero
bar/mauris
bar/scelerisque
bar/urna
foo/blandit
foo/elementum
foo/feugiat
foo/laoreet
foo/luctus
foo/non
foo/pellentesque
foo/pulvinar
foo/rutrum
foo/sed
foo/ut
foo/vivamusfile2:
bar/a
bar/molestie
bar/quam
bar/risus
bar/tristique
foo/blandit
foo/elementum
foo/feugiat
foo/ligula
foo/massa
foo/mauris
foo/metus
foo/pellentesque
foo/pulvinar
foo/utOutput from comm -3 file1 file2:
bar/feugiat
bar/libero
bar/mauris
bar/molestie
bar/quam
bar/risus
bar/scelerisque
bar/tristique
bar/urna
foo/laoreet
foo/ligula
foo/luctus
foo/massa
foo/mauris
foo/metus
foo/non
foo/rutrum
foo/sed
foo/vivamusOutput from diff -y --suppress-common-lines file1 file2 (GNU), it depends on the screen width:
bar/feugiat | bar/molestie
bar/libero | bar/quam
bar/mauris | bar/risus
bar/scelerisque | bar/tristique
bar/urna <
foo/laoreet | foo/ligula
foo/luctus | foo/massa
foo/non | foo/mauris
> foo/metus
foo/rutrum / foo/ut
foo/sed <
foo/ut <
foo/vivamus <Possible output I would wish for:
bar/feugiat <
bar/libero <
bar/mauris <
> bar/molestie
> bar/quam
> bar/risus
bar/scelerisque <
> bar/tristique
bar/urna <
foo/laoreet <
> foo/ligula
foo/luctus <
> foo/massa
> foo/mauris
> foo/metus
foo/non <
foo/rutrum <
foo/sed <
foo/vivamus <Without the arrows would be OK as well, just the screen width should be used better:
bar/feugiat
bar/libero
bar/mauris
bar/molestie
bar/quam
bar/risus
bar/scelerisque
bar/tristique
bar/urna
foo/laoreet
foo/ligula
foo/luctus
foo/massa
foo/mauris
foo/metus
foo/non
foo/rutrum
foo/sed
foo/vivamus | Naive line-by-line comparison like "comm -3" but looking like "diff -y" |
Using cut:
diff <(cut -c 20- file1) <(cut -c 20- file2)Note: with GNU cut the -c character option actually works on bytes not characters, but this should be fine as long as your output starts with date/time stamps and not special characters.
|
Can I compare two text files skipping N symbols from start of the each line?
For example file1:
2018-05-31 12:00:00 This is the first line of text.
2018-05-31 12:00:00 This is the second line of text.
2018-05-31 12:00:00 This is the third line of text.
2018-05-31 12:00:00 This is the forth line of text.
2018-05-31 12:00:00 This is the fifth line of text.and file2:
2018-05-31 12:00:01 This is the first line of text.
2018-05-31 12:00:02 This is the second line of text.
2018-05-31 12:00:03 This is the third line of text.
2018-05-31 12:00:04 This is the forth line of text.
2018-05-31 12:00:05 This is the fifth line of text.If I compare two files line by line - they are different because of the seconds in time stamp.
But if I skip first 19 symbols from the start of each line in both files (date and time) - these files are identical. How to do that using shell command (script)?
Thank you very much in advance.
| Compare text files skipping N symbols from each line |
The GNU version of cmp (which you are using) prints the differing bytes when given the -b option. If no printable representation of the byte can be shown, cmp will display [...] control bytes as a ^ followed by a letter of the alphabet and precede bytes that have the high bit set with M- (which stands for "meta").(quote from the cmp manual on a GNU system).
154 in the output refers to the letter l, and 151 refers to the letter i (also visible in the output). These are octal ASCII codes (see man ascii) for the first bytes in each file that differs between the files.
|
$ cmp -b file1 file2
file1 file2 differ: 12 byte, line 2 is 154 l 151 iin this response what do '154' and '151' refer to?
| `cmp -b file1 file2` responses: "file1 file2 differ: 12 byte, line 2 is 154 l 151 i", what is '154' and '151' in reference to? |
It doesn't seem to be able to handle the -y switch which does the side-by-side style of diff, but you can use the unified diff (-u). You can't mix these 2 styles so it's either -y or -u. So doing this worked for me:
$ diff -EwbBsu /directory/one /directory/two | kompare -o -This will not show the entire file with the matches, just the line that was different, with 3 lines of context, by default. If you want more context you can provide -u a argument of a number (u 10) for example.
$ diff -EwbBsU 10 /directory/one /directory/two | kompare -o - |
I want to quickly compare files in two different directories to see if the files are the same (same content). I want to see the results in Kompare (I'm on KDE - Kubuntu 12.04).
Here's my diff command:
diff -EwbBsy /directory/one /directory/two(That command would suit me even better if it ignored any files in /directory/one that are not already present in /directory/two, but I couldn't figure out how to achieve that.)
To use Kompare, I do this:
diff -EwbBsy /directory/one /directory/two | kompare -o -However, that gives the following error:Error: Could not parse diff output.I also tried:
diff -Ewbus /directory/one /directory/two | kompare -o -and just
diff /directory/one /directory/two | kompare -o -and a few other variations without success.
What am I doing wrong? Thanks.
| How to pipe diff into Kompare? |
Do you have meld installed? If so, the "Differences..." button is provided as part of meld's integration. To change the application invoked would apparently require uninstallation of meld (in which case the "Differences..." button will no longer be present) and installation of a custom application designed to similarly integrate with caja.
Bit of background...
The "Differences..." button wasn't present in my caja install. I ran across a comment at https://superuser.com/questions/1436673/vimdiff-show-differences-with-only-parent-rows#comment2168027_1436673 indicating that meld integrates with caja. On installing meld and restarting caja, the "Differences..." button was there and invoked meld.
|
How can I set custom application which is being invoked when I press "Differences" button in "File Conflict" dialog?I did not find corresponding option in "File Management Preferences".
I did not find it using "dconf Editor" in org.mate.caja.*.
I did not find it in files located at /usr/share/caja and ~/.config/caja.
Where is this option being stored at?
| Set custom application for file comparison in caja |
Using a shell with process substitutions (<(...)), e.g. bash or zsh:
diff <( head -n 20 file1 ) <( head -n 20 file2 )This run head -n 20 on each file to get the first 20 lines of each, in two separate process substitutions. Each process substitution will be expanded to the pathname of a file where the output of the command within may be read from (these files are temporary and are removed later).
The diff utility is then called to compare these two sets of data.
Without a process substitution:
head -n 20 file1 >file1.short
head -n 20 file2 | diff file1.short -
rm -f file1.shortThis creates a separate file from the 20 first lines of one file, and uses that with diff while the 20 first lines of the other files are read from standard input.
You may want to use -c or -u or some other option with diff in the commands above to get the diff format of your choice (see the diff manual).If the files are compressed, then you will have to uncompress the data:
diff <( gzip -d -c <file1 | head -n 20 ) <( gzip -d -c <file2 | head -n 20 )or, without process substitutions:
gzip -d -c <file1 | head -n 20 >file1.short
gzip -d -c <file2 | head -n 20 | diff file1.short -
rm -f file1.short |
What is an easy way I can compare the first 20 lines (or n lines) of two files?
I had set up an automated pg_dump, but it turns out the dumps being created are corrupt and now won't restore.
I still have a good dump file from a year ago, and I want to compare the first 20 lines between the two files.
What's an easy way of doing this?
I'm on Manjaro Linux.
| Compare the first 20 lines of two files |
Your sort solution may be a bit faster if you sort the files separately, then
use comm to find the non-common lines:
sort a.txt -o a.txt
sort b.txt -o b.txt
comm -3 a.txt b.txt | sed 's/^\t//'Alternatively, if one of your data files is not too large, you can read it all into an associative array then compare the other file line by line. Eg with awk:
awk '
ARGIND==1 { item[$0] = 1; next }
ARGIND==2 { if(!item[$0])print; else item[$0] = 2 }
END { for(i in item)if(item[i]==1)print i }
' a.txt b.txtIn the above ARGIND counts the files arguments.
The first line saves file 1 lines in array item. The next line sees if the current line from file 2 is in this array. If not it is printed, else we note that this item was seen in both files. Finally, we print the items that were not seen in both files.
If one of your files is much smaller than the other, it is best to put it first in the args so the item array stays small:
if [ $(wc -l <a.txt) -lt $(wc -l <b.txt) ]
then args="a.txt b.txt"
else args="b.txt a.txt"
fi
awk '
ARGIND==1 { item[$0] = 1; next }
ARGIND==2 { if(!item[$0])print; else item[$0] = 2 }
END { for(i in item)if(item[i]==1)print i }
' $args |
I have a command that produces a lists of strings followed by newlines, a, and a file containing a list of strings followed by newlines, b.txt. I need a command that calculates the symmetric difference of the output of a and contents of b.txt. Ideally this command should operate in a pipeline, as a is potentially very slow.
Venn diagram if you like those (Credits to Wikipedia):For those more example oriented:
a outputs
apple
carb.txt
banana
car
dogThen the result should be
apple
banana
dog | Symmetric Difference Pipe? |
diff -qrN is about as fast as it gets to compare two directory trees. The -q option makes it quit early when files differ. Since you expect the files to be identical most of the time, it doesn't matter all that much: the comparison tool has to read and compare the whole files anyway.
The only improvement you can make on diff is to avoid checking out from both repositories. Getting git to do the job may be faster then.
|
I am mirroring a Subversion repository tag with svn2git and I want to be sure that when I checkout particular revisions, those I obtain from the git mirror match those from Subversion. My main problem is that subversion tags can be updated, and I need to ensure that checking out the matching tag in the git mirror, matches the equivalent one in the Subversion branch.
Are there some tools that can make those checks efficiently? The source is quite a lot with many small files. There are quite a few answers here on the subject involving diff, but I wonder if there are more optimized tools for the job.
| What optimized tools are available for comparing directory contents? |
The following rsync command executed on the local machine lists the files that exist on the remote host but not the local host.
rsync -av --dry-run --delete somedir/ user@remote:~/somedir/The --dry-run switch only lists the files, without actually doing something, the --delete switch in combination with -v (verbose) lists the files that would be deleted because they exist on the remote host, but not the local host, which is want you want.
|
I have a folder.
I have one copy of this folder locally and one on a server. I edited my local folder as I wanted and then rsync it to the server.
Is there any way of comparing those two copies, local and remote, and get back a list of files that are on the remote one and not the local one?
| Get files on remote copy but not local |
The -q option to diff makes it only list the names of files with differences (or missing from one of the directories):
diff -q folder1 folder2 |
I have two folders with 200 txt files each, all files named like file1.txt, file2.txt, file3.txt, etc., on both folders.
Is there a way to use one command to compare file1 in both folders, file2 in both folders, etc., and list if they are the same or not? I just want to know which files are the same or not, not the differences.
| Comparing a bunch of files on different folders |
The traditional method:
diff -r dir1 dir2 That gives you a file-by-file difference, which can be kind of wordy. If you have Gnu diff,
you can try:
diff -r --brief dir1 dir2 |
Suppose I have same version of linux kernel but I changed some driver lines. Is there any way to compare these kernels and list the results. The result would be helpful to go back if I changed a lot in original drivers.
| Compare two similar directories and list differences between files |
To rip an audio CD you should really use a tool such as cdparanoia.
This will handle jitter and error correction, will retry as necessary, and try to create a "perfect" datastream.
Typically you would use this to create the wav files, which can then be converted to FLAC format as necessary.
There are other tools, including some front end GUIs, that can talk to external databases like CDDB to automatically work out the album and track names, but for raw audio ripping cdparanoia is hard to beat.
|
I have a hard time using Linux' built-in tools to rip an audio cd (sound juicer, rhythmbox). The reason likely being my drive, which vibrates a lot and cannot read the disk continuously. Playing the disk in any audio player results in short pauses and stutter-y playback. Ripping the CD results in noticeable artefacts. I would have thought there's some validation going on in those tools, say for example a buffer that's save to convert, but apparently that's not the case and data is converted as comes from the drive. This phenomenon occurred on several cds to different extent.
To work around the drive, I copied the .wav files over to disk (using thunar file browser). To double check if at least that worked, I found the location of the CD files, cd'd into that directory and used diff to compare the first file to the copied on in my music directory:
/run/user/1000/gvfs/cdda:host=sr0$ diff Track\ 1.wav ~/Music/Artist/Album/Track\ 1.wav
Binary files Track 1.wav and /home/me/Music/Artist/Album/Track 1.wav differOk, so they are different. Why is this the case? How can I copy the file correctly without getting a different one? Or is the problem with my verification? Is diff a valid way to to compare the two files?Ideally, I'd love to just rip a CD to flac files renamed to match the track titles like sound juicer would do, but more reliably.
| How can I copy a .wav file from an audio cd and verify it? |
Something like this:
vimdiff <(find /home/masi -printf "%P %u:%g %m\n" | sort) <(find /home/masi_backup -printf "%P %u:%g %m\n" | sort)(this gives names without the leading /home/masi or /home/masi_backup, owning user and group, and permissions — the latter weren't mentioned in the question but seem useful, drop %m if you don't want them).
|
I have two home folders: /home/masi and /home/masi_backup and I would like to find the differences between files of the two directories.
Pseudocode
vimdiff <`ls -la /home/masi` <`ls -la /home/masi_backup` How can you compare the differences of ownerships between the two directories?
| Find differences of ownerships between two home folders? |
You can do something like this:
for file in /path/to/dirA/*; do
fileName=${file##*/}
diff -q <(sort "$file") <(sort /path/to/dirB/"$fileName") &&
rm /path/to/dirB/"$fileName"
doneThat will iterate over all files in dirA, saving each as $file. Note that $file will include the path, so it will be /path/to/dirA/file1 and not just file1. This is why we need to get the file name, which we do by removing everything before the last slash (fileName=${file##*/}). Then, we compare the file silently to the file of the same name in directory B and, if they are identical, so if the diff exits successfully, we remove the file from directory B. The && means "run the next command only if this one is successful" so the rm will only run when the files are identical.
To make it recursive, assuming you are using bash, use:
shopt -s globstar
cd /path/to/dirA/
for file in **; do
fileName=${file#*/}
echo diff -q <(sort "$file") <(sort /path/to/dirB/"$fileName") &&
rm /path/to/dirB/"$fileName"
doneOr, a little more sophisticated, skipping directories and non-existent files:
shopt -s globstar
cd /path/to/dirA/
for file in **; do
if [ -d "$file" ]; then
echo "$file is a directory, skipping.";
else
fileName=${file#*/}
if [[ -e /path/to/dirB/"$fileName" ]]; then
echo diff -q <(sort "$file") <(sort /path/to/dirB/"$fileName") &&
rm /path/to/dirB/"$fileName"
fi
fi
done |
I'm completely new to bash. I have a requirement that needs to do the following: Iterate through a directory A's and directory B's folders with the same name
Find two files that have the same name and compare them (im using diff <(file1) <(sort file2) to compare the files)
If there no differences delete the file in directory A
If there are differences ignore and process the next matching pair of files
check the next folder from each directory and repeat the process until all matching folders have been checked.So for example in Directory A I have folderA that has 2 files (file1 and file2)
In directory B I have folderA that has 3 files (file1 and file2 and file3)File1 in both directories are the same - Delete from directory A
file2 there are differences - keep in both directories
file3 do nothing - keep in directory BThe files that I'm using are xml files. The ordering of tags sometimes differ in the files but the content would be exactly the same, unless there are additions which I'd want to keep the file. I don't necessarily care if the ordering of the tags are different I just want to make sure that all the content are the same or different. Hope that provides more clarity.
Any help would be much appreciated.
UPDATE:
So I've managed to get this far but when running the script the out put in the console is blank. It should list the files that have been found to be the same and remove them, where am I going wrong? declare -a my_array
shopt -s globstar
cd /mnt/c/filediff/validation/applications/ for file in **; do
if [ -d "$file" ]; then
echo "$file is a directory, skipping.";
else
fileName=${file#*/}
if [[ -e /mnt/c/filediff/package/"$fileName" ]]; then
echo diff -q <(sort "$file") <(sort /mnt/c/filediff/package/"$fileName") &&
my_array=("${my_array[@]}" "$fileName")
#rm /mnt/c/filediff/package/"$fileName"
fi
fi
done
echo -e '\nRemoved the following files -----------------------------------'
for item in "${my_array[@]}"
do
echo "ITEM: *** $item ***"
done | Delete file when there is no difference |
Python 3.x solution:
diff_marked.py script:
import sysfile1_name = sys.argv[1]
file2_name = sys.argv[2]with open(file1_name, 'r') as f1, open(file2_name, 'r') as f2:
f1_lines = f1.readlines() # list of lines of File1
f2_lines = f2.readlines() # list of lines of File2 for k,l in enumerate(f1_lines):
f1_fields = l.strip().split('|') # splitting a line into fields by separator '|'
if k < len(f2_lines) and f2_lines[k]:
has_diff = False
f2_fields = f2_lines[k].strip().split('|')
for i,f in enumerate(f1_fields):
if f != f2_fields[i]: # comparing respective lines 'field-by-field' between two files
f1_fields[i] = '**' + f + '**' # wrapping differing fields
f2_fields[i] = '**' + f2_fields[i] + '**'
has_diff = True if has_diff:
print(f1.name) # print file name
print('|'.join(f1_fields))
print(f2.name)
print('|'.join(f2_fields))Usage: (you may have another python version, the current case has been tested on python 3.5)
python3.5 diff_marked.py File1 File2 > diff_outputdiff_output contents:
File1
1|piyush|bangalore|**dev**
File2
1|piyush|bangalore|**QA**
File1
3|rohit|**delhi**|**QA**
File2
3|rohit|**bangalore**|**dev** |
I got a requirement where I need to compare two files wrt to each columns and write the corresponding difference in another file along with some identification showing mismatched columns. Pointing out the mismatched columns is my main problem statement. For example we have files like:
File 11|piyush|bangalore|dev
1|piyush|bangalore|QA
2|pankaj|bangalore|dev
3|rohit|delhi|QAFile 21|piyush|bangalore|QA
1|piyush|bangalore|QA
2|pankaj|bangalore|dev
3|rohit|bangalore|devThe expected output file looks somewhat like.
File 1
1|piyush|bangalore|**dev**
File 2
1|piyush|bangalore|**QA**
File 1
3|rohit|**delhi**|**QA**
File 2
3|rohit|**bangalore**|**dev**I want to achieve something like this where i can see the mismatched columns as well along with mismatched rows. I have tried
diff File1 File2 > Diff_File
But this is giving me only the mismatched records or rows. I am not getting any way to point out the mismatched columns as well. Please help me out if its possible to do is using shell script or awk command as i am very new to this. Thanks in advance.
| Comparing two files and writing mismatched rows along with mismatched columns. Pointing out the mismatched columns is my main problem statement |
Solution in bash or a similar shell with process substitution using the <(...) form:
comm -1 -2 <(sort list1) <(sort list2)Should you have duplicate entries in list2 then add the -u option to the sort call.
| I have a list from an inventory and another list from management. I'm trying to find the IP's that are similar between both files then output that is similar into another file:
I tried using diff but, the output did not made sense.
diff -buy list1 list2then I tried to use egrep using IP's from list 1but, I think I used the wrong syntax.
egrep -o `192.168.*|192.1.69` list2not sure what to use correctly
like:
list 1 maybe have:
192.168.1.1
192.168.1.2
192.168.1.3
192.168.2.1and I want to try to find this IPs in list2
| Compare different IPs in two files? [closed] |
With those files, you could use grep like:
grep -wf file2 file1though you'll need to dos2unix file2 first since it has \r characters at the end.
This will match whole words with -w and read the patterns from the file with -f. This would actually match the patterns anywhere in the line, but with the sample input you gave us, it should get the job done.
As for your python code, you might want to consider spliting the line once and using that list many times instead of re-splitting it each time you want part of it
|
I have two files:
aaaa 11 0.4 12 0.2
aaab 40 0.1 99 0.2 69 0.3
aaac 222 0.5 21 0.3
aaad 2 0.1
aaae 33 0.3
....and
aaaa
aaac
aaae
....I need to compare the first column of first file with second file and if a element is present in second file, write each line of the first file to a separate file. I have a script that does that in python but its extremely inefficient. Is it possible to do it from a terminal?
EDIT:
python script:
LABEL_FILE would be the first example and other 'file' - list is present_images-list of files in a folder.
f = open(LABEL_FILE, 'r')
present_images = iter(os.listdir(os.path.join(IMAGES_PATH, dataset))) templab = f.readlines()
num_info = len(templab)
image_ids = []
labels = []
labels_ind = []
for line in templab:
if len(line[:-1].split(' ')) != 1:
if (line[:-1].split(' ')[0] in present_images):
image_ids.append(os.path.join(IMAGES_PATH, dataset, line[:-1].split(' ')[0]))
line = line[:-1].split(' ')[1:]
labels_ind.append([int(i) for i in line[::2]])
labels.append([float(j) for j in line[1::2]]) | Comparing single column of a file with another |
Here is a quick and dirty shell "one-liner" with example output:
$ join -j2 <(cd sub1; wc -l *) <(cd sub2; wc -l *) | awk '$2!=$3'
file3.csv 5 1
file4.csv 1 5
total 11 17The total line is an artifact from the output of wc. It can be removed with another filter:
$ join -j2 <(cd sub1; wc -l *) <(cd sub2; wc -l *) | awk '$2!=$3' | head -n-1
file3.csv 5 1
file4.csv 1 5Explanation:
join will join two files based on a common column. In this case we join based on the second column (-j2). In the output of wc the second column is the filename. This will only print files which are common in both directories.
The wc invocations are done in process substitutions with working directory changed to sub1 or sub2 so the filenames are printed without directory name. This is so that join can find the common files.
The awk command compares the value in the second and third column and only prints the line if the values differ. This will filter out files with the same line count.
head -n-1 will print all lines but not the last line. This will filter out the last total line from wc.
|
I have a directory sub1 with the following files:
$ wc -l *5 file1.csv
5 file3.csv
1 file4.csvIn sub2, I have the following:
$ wc -l *
5 file1.csv
5 file2.csv
1 file3.csv
5 file4.csv
1 file5.csvIn the first directory, I might have files with added lines, which then go to the second dir. In this example, I might need to update file3 in sub2.
How do I get a list of the files with differences? I did some tests with diff and grep, but it doesn't work because the directories have different files (and hence the lines are different):
~/dir1/$ wc -l >> wc.luis~/dir1/$ wc -l * | awk '{ gsub(/\/home.*dir1\//,""); print $0 }'
| diff --side-by-side wc.luis -
| grep \|Ideally, I would get a list like this:
5 file3.csv | 1 file3.csv
1 file4.csv | 5 file4.csvAny help is appreciated!Notes: I cannot check on the date, because all files were updated, with or without changes.
Sometimes the newest files lack some lines, for which reason I cannot just take the bigger one. | How to find files with same name but different line count in two directories? |
Not knowing how big the chunk (A) is ... have you considered grep'ing for its content using -a?
|
I have a file (call it A, for reference) that may be a fragment of some other extant file on my system. I can't use cmp because I don't know how many bytes may be missing from the start of A (or, at least, I can't use it without brute forcing through the -i flag). Is there a way for me to discover whether A is already existent on my system (using GNU tools, or any other linux program)? Or will I have to botch together a c++ program to do the job? Note: efficiency is desirable since the files that A has to be compared with may be numerous.
| binary compare two files, failing only if first never matches any part of the second |
For your updated input, based strictly on 4-line records, you can use modulo arithmetic to maintain arrays of the current records, and check the 3rd lines for a match every 4th line:
$ awk '
{a[FNR%4] = $0; getline b[FNR%4] < "fileB"}
!(FNR%4) && b[3] != a[3] {
for(i=0;i<4;i++) print b[i%4]
}
' fileA
record2 line1=header
record2 line2
record2 line3 id GHI <= this is different
record2 line4(note that one really should check the return value of the getline command, and do something sensible if it fails).For your originally-posted input, you could have used paragraph mode:
$ awk -vRS= -F'\n' '{A3 = $3}; getline "fileB" > -1 && $3 != A3' fileA
record2 line1=header
record2 line2
record2 line3 id DEF <= this is different
record2 line4The empty RS causes whole blank-line separated records to be read, for both normal processing (input from fileA) and for getline (input from fileB). Setting the field separator to newline (\n) then allows us to save the whole line $3 from one and compare to the other. If they are not equal, the default print outputs $0 (which is the whole record from the getline of fileB).
|
I have 2 files, each containing the same amount of 4-lines records, in the same order:
fileA:
record1 line1=header
record1 line2 X <= this is different but should be ignored
record1 line3 id ABC
record1 line4
record2 line1=header
record2 line2
record2 line3 id DEF <= this is different
record2 line4fileB:
record1 line1=header
record1 line2 Y <= this is different but should be ignored
record1 line3 id ABC
record1 line4
record2 line1=header
record2 line2
record2 line3 id GHI <= this is different
record2 line4For each record, I want to compare its line3 between 2 files and if line3 are different, save the whole record (lines1-4) of fileB; in the example above, record1 will be ignored and record2 saved.
I have basic knowledge of diff and not sure if it is doable at all. First, I don't know how to compare only every 3rd line and ignore the others; second, -C defines symmetrical context, i.e. equal number of lines before and after the difference...
UPD. Initially I had a mistake in my examples: blank line between records which I don't have in my real files. I apologize for this.
Based on @stteldriver's answer, I have the following solution:
awk '
NR%4==3 {
lineA3=$0;
getline lineB1 < "fileB";
getline lineB2 < "fileB";
getline lineB3 < "fileB";
getline lineB4 < "fileB";
if (lineA3 != lineB3) {printf "%s\n%s\n%s\n%s\n", lineB1,lineB2,lineB3,lineB4;}
}' fileAIt works perfectly! Though the code is quite ugly (I'm only starting to learn awk!), will be grateful if you can optimize it.
| Compare every nth line in 2 files and save (asymmetric) context |
You can use pv as a progress indicator, and pipe that to the shasum function to check the hash to see if they are identical.
pv file1 | shasum
1.08MiB 0:00:00 [57.5MiB/s] [====================================>] 100%
303462e848ecbec5f8ab12718fa6239713eda1c6 -pv file2 | shasum
1.08MiB 0:00:00 [57.5MiB/s] [====================================>] 100%
303462e848ecbec5f8ab12718fa6239713eda1c6 - |
In a Unix command line context I would like to compare two truly huge files (around 1TB each), preferable with a progress indicator.
I have tried diff and cmp, and they both crashed the system (macOS Mojave), let alone giving me a progress bar.
What's the best way to compare these very large files?
Additional Details:I just want to check that they are identical.cmp crashed the system in a way that the system did restart by itself. :-( Maybe the system ran out of memory? | How to compare huge files with progress information |
Use grep:
$ grep -Ff f1 f2
palm
calmman grep:
-F, --fixed-strings
Interpret PATTERN as a list of fixed strings (instead of regular
expressions), separated by newlines, any of which is to be
matched.
-f FILE, --file=FILE
Obtain patterns from FILE, one per line. If this option is used
multiple times or is combined with the -e (--regexp) option,
search for all patterns given. The empty file contains zero
patterns, and therefore matches nothing. |
File 1:
happy
sad
calm
palmFile 2:
palm
dream
calmI want to compare the two files and display only those line that are common in both the files, but I want to maintain the order of File 2. My output should be:
palm
calmI know I can use comm after sorting the files but I want to maintain the order. Is there any way to do this?
| Compare two files line by line without comm (I need to maintain order of file 1) |
For example, given the files
shell> ssh admin@test_11 ls -1 /tmp/dir[1,2]
/tmp/dir1:
file1
file2/tmp/dir2:
file1
file2shell> ssh admin@test_13 ls -1 /tmp/dir[1,2]
/tmp/dir1:/tmp/dir2:and the differences
shell> ssh admin@test_11 diff /tmp/dir1/file1 /tmp/dir2/file1
31,32d30
< User1:*:1002:1004:My User1:/home/User1:/bin/sh
< MyUser1:*:1003:1005:My User1:/home/MyUser1:/bin/shshell> ssh admin@test_11 diff /tmp/dir1/file2 /tmp/dir2/file2
33,34d32
< alice:*:1004:1006:Alice:/home/alice:/bin/sh
< bob:*:1005:1007:Bob:/home/bob:/bin/shDeclare the variables
dir1: /tmp/dir1
dir2: /tmp/dir2
dir1_files: "{{ out_dir1.files|map(attribute='path') }}"
dir2_files: "{{ out_dir2.files|map(attribute='path') }}"and find the files
- find:
paths: "{{ dir1 }}"
file_type: file
register: out_dir1
- find:
paths: "{{ dir2 }}"
file_type: file
register: out_dir2Find the shared files. Declare the variable
files_common: "{{ dir1_files|map('basename')|
intersect(dir2_files|map('basename')) }}"gives (abridged) for test_11 and test_13 respectively
files_common:
- file1
- file2 files_common: []Compare the shared files. You have to ignore the errors because diff returns rc=1 if the files differ
- command: "diff {{ dir1 }}/{{ item }} {{ dir2 }}/{{ item }}"
loop: "{{ files_common }}"
ignore_errors: true
register: out_diffand create the report
- debug:
msg: |
{% for i in out_diff.results %}
{{ i.cmd|join(' ') }}: |
{{ i.stdout|indent(2) }}
{% endfor %}gives (abridged) for test_11 and test_13 respectively
msg: |-
diff /tmp/dir1/file1 /tmp/dir2/file1: |
31,32d30
< User1:*:1002:1004:My User1:/home/User1:/bin/sh
< MyUser1:*:1003:1005:My User1:/home/MyUser1:/bin/sh
diff /tmp/dir1/file2 /tmp/dir2/file2: |
33,34d32
< alice:*:1004:1006:Alice:/home/alice:/bin/sh
< bob:*:1005:1007:Bob:/home/bob:/bin/sh msg: ""Example of a complete playbook for testing
- hosts: all vars: dir1: /tmp/dir1
dir2: /tmp/dir2
dir1_files: "{{ out_dir1.files|map(attribute='path') }}"
dir2_files: "{{ out_dir2.files|map(attribute='path') }}"
files_common: "{{ dir1_files|map('basename')|
intersect(dir2_files|map('basename')) }}"
tasks: - find:
paths: "{{ dir1 }}"
file_type: file
register: out_dir1
- find:
paths: "{{ dir2 }}"
file_type: file
register: out_dir2
- debug:
var: files_common - command: "diff {{ dir1 }}/{{ item }} {{ dir2 }}/{{ item }}"
loop: "{{ files_common }}"
ignore_errors: true
register: out_diff
- debug:
msg: |
{% for i in out_diff.results %}
{{ i.cmd|join(' ') }}: |
{{ i.stdout|indent(2) }}
{% endfor %}Q: "The msg module output is not to my liking I would rather prefer a format similar to that given in Q inside the output section can it be achieved?"
A: Sure. Try the template below. If this is not what you like edit your question and provide an example of what you like better
- debug:
msg: |
{% for i in out_diff.results %}
{{ i.cmd.1|basename }} from {{ i.cmd.1|dirname }}:
{{ i.stdout|indent(2) }}
{{ i.cmd.2|basename }} from {{ i.cmd.2|dirname }}:
{{ i.stdout|indent(2) }}
{% endfor %}The format of the output depends on the callback. See DEFAULT_STDOUT_CALLBACK. If you use ansible.builtin.default callback set ANSIBLE_CALLBACK_RESULT_FORMAT=yaml
shell> ANSIBLE_STDOUT_CALLBACK=default ANSIBLE_CALLBACK_RESULT_FORMAT=yaml ansible-playbook pb.yml ...
ok: [test_11] =>
msg: |-
file1 from /tmp/dir1:
31,32d30
< User1:*:1002:1004:My User1:/home/User1:/bin/sh
< MyUser1:*:1003:1005:My User1:/home/MyUser1:/bin/sh
file1 from /tmp/dir2:
31,32d30
< User1:*:1002:1004:My User1:/home/User1:/bin/sh
< MyUser1:*:1003:1005:My User1:/home/MyUser1:/bin/sh
file2 from /tmp/dir1:
33,34d32
< alice:*:1004:1006:Alice:/home/alice:/bin/sh
< bob:*:1005:1007:Bob:/home/bob:/bin/sh
file2 from /tmp/dir2:
33,34d32
< alice:*:1004:1006:Alice:/home/alice:/bin/sh
< bob:*:1005:1007:Bob:/home/bob:/bin/sh
ok: [test_13] =>
msg: ""See
shell> ansible-doc -t callback ansible.builtin.default |
I have 2 directories
$ tree dir{1..2}
dir1
├── file1
└── file2
dir2
├── file1
└── file2I want to compare all files in dir1 with all files in dir2 using ansible
and print differences like this
output:
${file1} from ${dir1}:
diff content${file1} from ${dir2}:
diff contentand it will loop through all files to print their differences
below is ansible snippet that needs modification
---
- name: Compare files in two directories
hosts: localhost
gather_facts: false
tasks:
- name: Find files in directory 1
find:
paths: ~/dir1
file_type: file
register: dir1_files - name: Find files in directory 2
find:
paths: ~/dir2
file_type: file
register: dir2_files - name: Compare files
shell: diff dir1/file1 dir2/file1 ## how can I make sure path changes but filenames stay same using variables
loop: "{{ dir1_files.files }}"
register: diff_output
changed_when: false
failed_when: diff_output.rc == 1 - name: Print differences
debug:
msg: |
{{ item.item.path }} from dir1:
{{ item.stdout }}
{{ item.item.path }} from dir2:
{{ item.stdout }}
loop: "{{ diff_output.results }}"
when: item.stdout_lines | length > 0For suggested code in Vladimir's answer, I get below output
TASK [debug] *****************************************************************************************************************************************
ok: [localhost] => {
"msg": "file2 from dir1: \n 1,2c1,2\n < abc123\n < def456\n ---\n > abc101\n > def111\nfile2 from dir2: \n 1,2c1,2\n < abc123\n < def456\n ---\n > abc101\n > def111\nfile1 from dir1: \n 1,2c1,2\n < 123abc\n < 456def\n ---\n > 101abc\n > 111def\nfile1 from dir2: \n 1,2c1,2\n < 123abc\n < 456def\n ---\n > 101abc\n > 111def\n"
} | ansible comparing all files in 2 directories and printing the difference |