output | input | instruction
---|---|---|
GNU Parallel spends 2-10 ms overhead per job. It can be lowered a bit by using -u, but that means you may get output from different jobs mixed.
GNU Parallel is not ideal if your jobs are in the ms range and runtime matters: The overhead will often be too big.
You can spread the overhead to multiple cores by running multiple GNU Parallels:
seq 5000 | parallel --pipe --round-robin -N100 parallel isprime
You still pay the overhead, but now you at least have more cores to pay with.
A better way would be to change isprime so that it takes multiple inputs and thus takes longer to run:
isprime() {
_isprime () {
local n=$1
((n==1)) && return 1
for ((i=2;i<n;i++)); do
if ((n%i==0)); then
return 1
fi
done
printf '%d\n' "$n"
}
for t in "$@"; do
_isprime $t
done
}
export -f isprime
seq 5000 | parallel -X isprime
# If you do not care about order, this is faster because higher numbers always take more time
seq 5000 | parallel --shuf -X isprime
|
I created this script out of boredom with the sole purpose of using/testing GNU parallel so I know it's not particularly useful or optimized, but I have a script that will calculate all prime numbers up to n:
#!/usr/bin/env bash
isprime () {
local n=$1
((n==1)) && return 1
for ((i=2;i<n;i++)); do
if ((n%i==0)); then
return 1
fi
done
printf '%d\n' "$n"
}
for ((f=1;f<=$1;f++)); do
isprime "$f"
done
When run with the loop:
$ time ./script.sh 5000 >/dev/null
real 0m28.875s
user 0m38.818s
sys 0m29.628s
I would expect replacing the for loop with GNU parallel would make this run significantly faster but that has not been my experience. On average it's only about 1 second faster:
#!/usr/bin/env bash
isprime () {
local n=$1
((n==1)) && return 1
for ((i=2;i<n;i++)); do
if ((n%i==0)); then
return 1
fi
done
printf '%d\n' "$n"
}
export -f isprime
seq 1 $1 | parallel -j 20 -N 1 isprime {}
Run with parallel:
$ time ./script.sh 5000 >/dev/null
real 0m27.655s
user 0m38.145s
sys 0m28.774s
I'm not really interested in optimizing the isprime() function, I am just wondering if there is something I can do to optimize GNU parallel?
In my testing seq actually runs faster than for ((i=1...)) so I don't think that has much if anything to do with the runtime.
Interestingly, if I modify the for loop to:
for ((f=1;f<=$1;f++)); do
isprime "$f" &
done | sort -n
It runs even quicker:
$ time ./script.sh 5000 >/dev/null
real 0m5.995s
user 0m33.229s
sys 0m6.382s
| How to optimize GNU parallel for this use? |
Don't reinvent the wheel. You can use pigz, a parallel implementation of gzip which should be in your distributions repositories. If it isn't, you can get it from here.
Once you've installed pigz, use it as you would gzip:
pigz *txt
I tested this on 5 50M files created using for i in {1..5}; do head -c 50M /dev/urandom > file"$i".txt; done:
## Non-parallel gzip
$ time gzip *txt
real 0m8.853s
user 0m8.607s
sys 0m0.243s
## Shell parallelization (same idea as yours, just simplified)
$ time ( for i in *txt; do gzip $i & done; wait)
real 0m2.214s
user 0m10.230s
sys 0m0.250s
## pigz
$ time pigz *txt
real 0m1.689s
user 0m11.580s
sys 0m0.317s |
I am looking to speed up the gzip process. (The server is AIX 7.1.)
More specifically, the current implementation is gzip *.txt and it takes up to 1 h to complete.
(file extractions are quite big and we got a total of 10 files)
Question: Will it be more efficient to run
pids=""
gzip file1.txt &
pids+=" $!"
gzip file2.txt &
pids+=" $!"
wait $pids
than
gzip *.txt
Is the gzip *.txt behavior the same in terms of parallelism, CPU consumption, etc. as running the gzip commands in the background (&), or will the other option be more efficient?
| gzip *.txt vs gzip test.txt & gzip test2.txt & |
The printf call would run one or more write(2) system calls, and the order they are processed would be the actual order of the output. One or more, because it depends on the buffering inside the C library. With line-buffered output (going to the terminal), you'd likely get two write calls, once for the initial newline, and another for the rest.
write(1, "\n", 1);
write(1, " 1234567890 \n", 13);It's possible for another process to be scheduled between the calls, giving
first the two empty lines, then the lines with the digits, but given that there's not much processing going on, it's unlikely on an unloaded system.
Note that since both processes print the exact same thing, it doesn't matter which one goes first, as long as one does not interrupt the other.
If the output goes to a file or a pipe, the default is for it to be fully buffered, so you'd probably only get one write call (per process), and no chance of mixed output.
Your example of intermixed digits would be possible if the digits were being output one-by-one, with individual system calls. It's hard to understand why a sensible library implementation would do that when printing a static string whose length is known. With more writes in a loop, intermixed output would be more likely:
Something like this:
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
int i;
setbuf(stdout, NULL); /* explicitly unbuffered */
int x = fork();
for (i = 0 ; i < 500 ; i++) {
printf("%d", !!x);
}
if (x) {
wait(NULL);
printf("\n");
}
return 0;
}
Gives me output like below. Most of the time that is, not always. It's up to the system to decide how to schedule the processes. The unpredictability is why we usually try to avoid race-conditions.
111111111111111111111111111111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111111111111111111111111111111
111100000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000010000001001100
110000000011001100110011000000010011001100110000000100110011001100000001001
100110011000000010011001100110000000100110011001100000001001100110011000000
... |
The following C program is supposed to illustrate a race condition between child and parent processes:
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

int main()
{
fork();
printf("\n 1234567890 \n");
return 0;
}
When my friend executes it (on Ubuntu), they get the expected output, which is jumbled 1234567890s
One example : 12312345645678907890
But when I try the same program on my Arch Linux, it never gives such an output. It's always one after the other.
1234567890 1234567890
I like that Arch Linux is somehow avoiding the race condition, but I would like to disable any such feature and get output like my friend's.
| Race Condition not working on Arch Linux |
The important part is having a way to merge the output of your several openssl commands. I believe a FIFO would solve your problem. Try this
mkfifo foo
openssl <whatever your command is> > foo &
openssl <whatever your command is> > foo &
openssl <whatever your command is> > foo &
dd if=foo of=/dev/sda bs=4M
EDIT: Add as many of the openssl lines as you need to max out your system; you can even add them after dd invocation.
As mentioned by the OP in the comments below, it is possible to cat foo | pv | dd of=/dev/sda to monitor progress.
|
I am using @tremby's great idea to fill a disk with random data.
This involves piping openssl, which is encrypting a lot of zeros, to dd (bs=4M).
I'm maxing out the single core on which this is being run (I have 7 more), and I'm nowhere near maxing out my I/O.
I'm looking for a way to parallelize the input to dd.
I suppose I could do it like this, but what I'm really looking for is a way to parallelize openssl and write that to dd so that the write to the disk is sequential.
Does anyone have a suggestion?
| Parallelize openssl as input to dd |
Look at the CONCURRENCY variable in /etc/init.d/rc, you have several choices.
When set to makefile, then the init process does it in parallel.
There are different comments depending on your distribution:
#
# Check if we are able to use make like booting. It require the
# insserv package to be enabled. Boot concurrency also requires
# startpar to be installed.
#
CONCURRENCY=makefile

# Specify method used to enable concurrent init.d scripts.
# Valid options are 'none' and 'makefile'. Obsolete options
# used earlier are 'shell' and 'startpar'. The obsolete options
# are aliases for 'makefile' since 2010-05-14. The default since
# the same date is 'makefile', as the init.d scripts in Debian now
# include dependency information and are ordered using this
# information. See insserv for information on dependency based
# boot sequencing.
#CONCURRENCY=makefile
CONCURRENCY=none
See also the line in your init script:
eval "$(startpar -p 4 -t 20 -T 3 -M $1 -P $previous -R $runlevel)"
See also man startpar
Good hint from Timo: The Bootchart package lets you visualize your boot process.
Good reads: init, SysV, History
[edit]
It is often difficult to use bootchart, so here a howto:
Bootchart Micro Howto
Install it: apt-get install bootchart2 pybootchartgui
reboot
in the boot screen of grub press e for edit.
then find the line with kernel boot parameters and add init=/sbin/bootchartd
press F10 for boot
after your OS is up and running, open a terminal window and run sudo pybootchartgui
you'll find your bootchart.png in the working directory
| I was wondering if I could manage to initialize drivers, services etc. (all these jobs that Linux does during startup) in parallel instead of sequentially. I want to aggressively lower the boot time. I know some services depend on each other, but to make an easy example: while probing the network devices, it should take care of the audio too, instead of waiting 10 s until the network is ready.
I heard of concepts like systemd and InitNG but I'm sure there has to be other methods. Isn't there an option or flag for the kernel itself to boot this way?
| Can I force Linux to boot its initializations parallel? [closed] |
If you use GNU Parallel you can do one of these:
parallel cp {} destination/folder/ :::: filelist
parallel -a filelist cp {} destination/folder/
cat filelist | parallel cp {} destination/folder/
Consider spending 20 minutes on reading chapter 1+2 of the GNU Parallel 2018 book (print: http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html online: https://doi.org/10.5281/zenodo.1146014). Your command line will love you for it.
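Since the question asks for copying three files at a time, the number of simultaneous jobs can be capped with -j (this line is an addition, not part of the original answer):

parallel -j 3 -a filelist cp {} destination/folder/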
|
Let's say I have a file listing the path of multiple files like the following:
/home/user/file1.txt
/home/user/file2.txt
/home/user/file3.txt
/home/user/file4.txt
/home/user/file5.txt
/home/user/file6.txt
/home/user/file7.txt
Let's also say that I want to copy those files in parallel 3 by 3. I know that with the parallel command I can execute a specific command in parallel like the following:
parallel bash -c "echo hello world" -- 1 2 3However, this way of running parallel is hardcoded because even if I use a variable inside the quotes, it will only have a fixed parameter. I'd like to execute the parallel command getting parameters dynamically from a file. As an example, let's say I'd like to copy all files from my file running three parallel processes (something like cp "$file" /home/user/samplefolder/). How can I do it? Is there any parameter I can use with parallel to accomplish that and get parameters dynamically from a file?
| How can I use the parallel command while getting parameters dynamically from a file? |
Something like this:
gpus=2

find files |
parallel -j +$gpus '{= $_ = slot() > '$gpus' ? "foo" : "bar" =}' {}
Less scary:
parallel -j +$gpus '{=
if(slot() > '$gpus') {
$_ = "foo"
} else {
$_ = "bar"
}
=}' {}
-j +$gpus Run one job per CPU thread + $gpus
{= ... =} Use perl code to set $_.
slot() Job slot number (1..cpu_threads+$gpus).
|
I have a large dataset (>200k files) that I would like to process (convert files into another format). The algorithm is mostly single-threaded, so it would be natural to use parallel processing. However, I want to do an unusual thing. Each file can be converted using one of two methods (CPU- and GPU-based), and I would like to utilize both CPU and GPU at the same time.
Speaking abstractly, I have two different commands (foo and bar), which are supposed to produce equivalent results. I would like to organize two thread pools with fixed capacity that would run up to N instances of foo and M instances of bar respectively, and process each input file with either of those pools depending on which one has free slots (determinism is not required or desired).
Is it possible to do something like that in GNU parallel or with any other tool?
| Process multiple inputs with multiple equivalent commands (multiple thread pools) in GNU parallel |
I do not know ngram-merge so I use cat:
n=$(ls | wc -l)
while [ $n -gt 1 ]; do
parallel -N2 '[ -z "{2}" ] || (cat {1} {2} > '$n'.{#} && rm -r {} )' ::: *;
n=$(ls | wc -l);
done
But it probably looks like this:
n=$(ls | wc -l)
while [ $n -gt 1 ]; do
parallel -N2 '[ -z "{2}" ] || ( /vol/customopt/lamachine.stable/bin/ngram-merge -write '$n'.{#} -- {1} {2} && rm -r {} )' ::: *;
n=$(ls | wc -l)
done |
I'd like to combine a bunch of language model (LM) count files using SRILM's ngram-merge program. With that program, I could combine a whole directory of count files, e.g. ngram-merge -write combined_file -- folder/*. However, with my amount of data it would run for days, so I'd love to merge the files in parallel!
The script below basically does the following:
It splits the files in the directory into two equally sized sets (if the number of files is odd, two files are merged before the sets are built)
It loops through the sets and merges two files together, whereby the new file is written into a new subdirectory (this should be done in parallel)
It examines whether there is only one file in the new subdirectory; if not, 1. starts again in the newly created subdirectory
The script works; unfortunately, though, the ngram-merge commands are not run in parallel. Can you fix that? Furthermore, the folder structure that is created on the fly is kinda ugly. And I'm also not a shell expert. So I'll be thankful for every remark that makes the whole thing more elegant!!! Thx :-)
#!/bin/bash

# Get arguments
indir=$1
# Count number of files
number="$(ls -1 $indir | wc -l)"
# Determine number of cores to be used in parallel
N=40

# While more than one file, combine files
while [ "$number" -gt 1 ]; do # determine splitpoint
split="$((number/2))"
# Determine whether number of files is odd
if [ $((number%2)) -eq 1 ]
# if it is odd, combine first and last file and rm last file
then
first="$indir$(ls -1 $indir | head -1)"
last="$indir$(ls -1 $indir | tail -1)"
new="$first""$last"
/vol/customopt/lamachine.stable/bin/ngram-merge -write $new -- $first $last && rm -r $first $last
fi # Determine first half of files and second half
set1="$(ls -1 $indir | head -$split)"
set2="$(ls -1 $indir | head -$((split*2)) | tail -$split)"
# Make new dir
newdir="$indir"merge/
mkdir $newdir # Paralelly combine files pairwise and save output to new dir
(
for i in $(seq 1 $split); do
file1="$indir$(echo $set1 | cut -d " " -f $i)"
file2="$indir$(echo $set2 | cut -d " " -f $i)"
newfile="$newdir""$i".counts
/vol/customopt/lamachine.stable/bin/ngram-merge -write $newfile -- $file1 $file2 && rm -r $file $file2
((i=i%N)); ((i++==0)) && wait
done
) # Set indir = newdir and recalculate number of files
indir=$newdir
number="$(ls -1 $indir | wc -l)"done | Pairwise merging of files in parallel until only 1 file left over |
Depends how you create the subshell.
( command ) will run command in a subshell and wait for the subshell to complete before continuing.
command & will run command in a subshell in the background. The shell will continue on to the next command without waiting for the subshell to finish. This may be used for parallel processing.
coproc command is similar to command &, but also establishes a two-way pipe between the main shell and the subshell.
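A minimal bash illustration of the three forms (not from the original answer; sleep and bc are only stand-in commands):

( sleep 2; echo "subshell finished" )                 # the parent shell waits ~2 s here
echo "runs only after the subshell"

( sleep 2; echo "background subshell finished" ) &    # the parent does not wait
echo "runs immediately"
wait                                                  # now wait for the background subshell

coproc BC { bc; }                 # two-way pipe: read end ${BC[0]}, write end ${BC[1]}
echo "2+2" >&"${BC[1]}"
read -r answer <&"${BC[0]}"
echo "bc said: $answer"           # -> bc said: 4
kill "$BC_PID"                    # stop the coprocess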
|
This question follows from this one and one of its answers' recommendation to read Linuxtopia - Chapter 20. Subshells.
I'm a bit confused by this statement at the Linuxtopia site:
subshells let the script do parallel processing, in effect executing multiple subtasks simultaneously.
https://www.linuxtopia.org/online_books/advanced_bash_scripting_guide/subshells.html
Does this mean that subshells, run from a script, are always run in parallel to the original script? Experimentally, this does not appear to be the case, but I'd be grateful for expert confirmation one way or the other.
#! /usr/bin/bash

# This script reliably prints:
# hello
# world
# 0
# ...implying that the subshell is not run in parallel with this script.

(echo hello;
echo world)

echo $?
.
#! /usr/bin/bash

# This script prints:
# 0
# hello
# world
# ...implying that the subshell is run in parallel with this script.

(echo hello;
echo world) &

echo $?
Is the use of & what the Linuxtopia site might have meant by "[letting] the script do parallel processing"?
Note: I'm familiar with the concept of suffixing commands with & in bash...it runs said command as a background process. So my question is more about whether command(s) executed in a subshell are run as a background/parallel process by default, or if the addition of the & here, as well, is what causes the background/parallel execution. The wording of the Linuxtopia article, to me, implied the former, which doesn't appear to match observation.
| Are subshells run in parallel by default? |
Something like this:
# Your variable initialization
readonly FOLDER_LOCATION=/export/home/username/pooking/primary
readonly MACHINES=(testMachineB testMachineC)
PARTITIONS=(0 3 5 7 9 11 13 15 17 19 21 23 25 27 29) # this will have more file numbers around 400

dir1=/data/snapshot/20140317

# delete all the files first
find "$FOLDER_LOCATION" -mindepth 1 -delete

# Bash function to copy a single file based on your script
do_copy() {
el=$1
scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 username@${MACHINES[0]}:$dir1/s5_daily_1980_"$el"_200003_5.data $FOLDER_LOCATION/. || scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 username@${MACHINES[1]}:$dir1/s5_daily_1980_"$el"_200003_5.data $FOLDER_LOCATION/.
}

# export -f is needed so GNU Parallel can see the function
export -f do_copy

# Run 5 do_copy in parallel. When one finishes, start another.
# Give them each an argument from PARTITIONS
parallel -j 5 do_copy ::: "${PARTITIONS[@]}"
To learn more:
Watch the intro video for a quick introduction:
https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial (man parallel_tutorial). Your command line
will love you for it.
10 seconds installation of GNU Parallel:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash |
I am trying to copy files from testMachineB and testMachineC into testMachineA as I am running my shell script on testMachineA.
If the file is not there in testMachineB, then it should be there in testMachineC for sure. So I will try to copy file from testMachineB first, if it is not there in testMachineB then I will go to testMachineC to copy the same files.
PARTITIONS is the file partition number which I need to copy in testMachineA in directory FOLDER_LOCATION.
#!/bin/bash

readonly FOLDER_LOCATION=/export/home/username/pooking/primary
readonly MACHINES=(testMachineB testMachineC)
PARTITIONS=(0 3 5 7 9 11 13 15 17 19 21 23 25 27 29) # this will have more file numbers around 400

dir1=/data/snapshot/20140317

# delete all the files first
find "$FOLDER_LOCATION" -mindepth 1 -delete
for el in "${PARTITIONS[@]}"
do
scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 username@${MACHINES[0]}:$dir1/s5_daily_1980_"$el"_200003_5.data $FOLDER_LOCATION/. || scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 username@${MACHINES[1]}:$dir1/s5_daily_1980_"$el"_200003_5.data $FOLDER_LOCATION/.
done
Problem Statement:-
Now what I am trying to do is - Split the PARTITIONS array which contains the partition number in set of five files. So I will copy first set which has 5 files in parallel. Once these five files are done, then I will move to next set which has another five files and download them in parallel again and keep on doing this until all the files are done.
I don't want to download all the files in parallel, just five files at a time.
Is this possible to do using bash shell scripting?
Update:-
Is something like this what you are suggesting?
echo $$

readonly FOLDER_LOCATION=/export/home/username/pooking/primary
readonly MACHINES=(testMachineB testMachineC)
ELEMENTS=(0 3 5 7 9 11 13 15 17 19 21 23 25 27 29)
LEN_ELEMENTS=${#ELEMENTS[@]}
X=0

dir1=/data/snapshot/20140317

function download() {
if [[ $X < $LEN_ELEMENTS ]]; then
(scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 username@${MACHINES[0]}:$dir1/s5_daily_1980_"${ELEMENTS[$X]}"_200003_5.data $FOLDER_LOCATION/. || scp -o ControlMaster=auto -o 'ControlPath=~/.ssh/control-%r@%h:%p' -o ControlPersist=900 username@${MACHINES[1]}:$dir1/s5_daily_1980_"${ELEMENTS[$X]}"_200003_5.data $FOLDER_LOCATION/.) && kill -SIGHUP $$ 2>/dev/null &
fi
}

trap 'X=$((X+1)); download' SIGHUP

# delete old files
find "$FOLDER_LOCATION" -mindepth 1 -delete

# initial loop
for x in {1..5}
do
download
done

# waiting loop
while [ $X -lt $LEN_ELEMENTS ]
do
sleep 1
done
Does the above look right? And also, where do I now put my delete command?
| How to split the array in set of five files and download them in parallel? |
To debug this I will suggest you first run this with something simpler than gdalmerge_and_clean.
Try:
seq 100 | parallel 'seq {} 100000000 | gzip | wc -c'
Does this correctly run one job per CPU thread?
seq 100 | parallel -j 95% 'seq {} 100000000 | gzip | wc -c'
Does this correctly run 19 jobs for every 20 CPU threads?
My guess is that gdalmerge_and_clean is actually run in the correct number of instances, but that it depends on I/O and is waiting for this. So your disk or network is pushed to the limit while the CPU is sitting idle and waiting.
You can verify the correct number of copies is started by using ps aux | grep gdalmerge_and_clean.
You can see if your disks are busy with iostat -dkx 1.
|
How can I get reasonable parallelisation on multi-core nodes without saturating resources? As in many other similar questions, the question is really how to learn to tweak GNU Parallel to get reasonable performance.
In the following example, I can't get to run processes in parallel without saturating resources or everything seems to run in one CPU after using some -j -N options.
From inside a Bash script running in a multi-core machine, the following loop is passed to GNU Parallel
for BAND in $(seq 1 "$BANDS") ;do
echo "gdalmerge_and_clean $VARIABLE $YEAR $BAND $OUTPUT_PIXEL_SIZE_X $OUTPUT_PIXEL_SIZE_Y"
done |parallel
This, however, saturates the machine and slows down processing.
In man parallel I read
--jobs -N
-j -N
--max-procs -N
-P -N
Subtract N from the number of CPU threads.
Run this many jobs in parallel. If the evaluated number is less than 1 then 1
will be used.
See also: --number-of-threads --number-of-cores --number-of-sockets
and I've tried to use
|parallel -j -3
but this, for some reason, uses only one CPU out of the 40. Checking with [h]top, only one CPU is reported high-use, the rest down to 0. Should -j -3 not use 'Number of CPUs' - 3 which would be 37 CPUs for example?
and I extended the previous call then
-j -3 --use-cores-instead-of-threads
blindly doing so, I guess. I've read https://unix.stackexchange.com/a/114678/13011, and I know from the admins of the cluster I used to run such parallel jobs, that hyperthreading is disabled. This is still running in one CPU.
I am now trying to use the following:
for BAND in $(seq 1 "$BANDS") ;do
echo "gdalmerge_and_clean $VARIABLE $YEAR $BAND $OUTPUT_PIXEL_SIZE_X $OUTPUT_PIXEL_SIZE_Y"
done |parallel -j 95%
or with |parallel -j 95% --use-cores-instead-of-threads.
Note
For the record, this is part of a batch job, scheduled via HTCondor and each job running on a separate node with some 40 physical CPUs available.
Above, I kept only the essential -- the complete for loop piped to parallel is:
for BAND in $(seq 1 "$BANDS") ;do
# Do not extract, unscale and merge if the scaled map exists already!
SCALED_MAP="era5_and_land_${VARIABLE}_${YEAR}_band_${BAND}_merged_scaled.nc"
MERGED_MAP="era5_and_land_${VARIABLE}_${YEAR}_band_${BAND}_merged.nc"
if [ ! -f "${SCALED_MAP+set}" ] ;then
echo "log $LOG_FILE Action=Merge, Output=$MERGED_MAP, Pixel >size=$OUTPUT_PIXEL_SIZE_X $OUTPUT_PIXEL_SIZE_Y, Timestamp=$(timestamp)"
echo "gdalmerge_and_clean $VARIABLE $YEAR $BAND $OUTPUT_PIXEL_SIZE_X >$OUTPUT_PIXEL_SIZE_Y"
else
echo "warning "Scaled map "$SCALED_MAP" exists already! Skipping merging.-""
fi
done |parallel -j 95%
log "$LOG_FILE" "Action=Merge, End=$(timestamp)"where `log` and `warning` are a custom functions | GNU Parallel with -j -N still uses one CPU |
Some things you're missing:
Shell variables are stored in shell memory;
i.e., the memory of the shell process.
Most commands that you run from a shell
are run in a child process (or processes).
The only exceptions are "built-in commands".
Asynchronous commands are always run in a child process
— even if they don't run any programs.
An asynchronous command that doesn't run any programs
is a child process that only runs the shell.
This is known as a "sub-shell".
Generally speaking, processes can't change other processes' memory.
In particular, sub-shells can't modify variables in the main shell process.
So when you say appendnum $no &, the appendnum function
cannot modify the x variable in the main shell process.
You can get something like the behavior you're trying to get with this:
x=TR007.out
> "$x"
appendnum() {
echo "$1" >> "$x"
}
for no in {0..10}
do
appendnum $no &
done
waitYou will get the numbers 0 through 10 written to the file TR007.out.The scheduling (sequencing) of asynchronous processes is indeterminate.
Therefore, in the above example script,
while you will get the numbers 0 through 10 written to the file,
they might not be in order.
As you might know, wait by itself (with no arguments)
will wait for all child processes.
“Irrespective of number of task, my response time should be same.”
That’s a very bold expectation / request.
Whether it is reasonable depends on context.
If the task is a single-threaded compute-intensive one,
and you have three or more (logical) CPUs, then, yes,
it may be reasonable to expect three tasks run in parallel
to take little more time than one by itself.
But if you have four logical CPUs,
it is totally unreasonable to expect to run 50 tasks
in the same amount of time it takes to run one.
I mentioned that the child (asynchronous) processes
run in an unpredictable order.
Since they are running concurrently (i.e., in parallel),
their execution will likely overlap.
So, if we change the above script to do
appendnum() {
echo "$1"a >> "$x"
echo "$1"b >> "$x"
}
for no in {1..3}
do
appendnum $no &
done
then you might get 1a/1b/2a/2b/3a/3b in the file —
or you might get 2a/2b/1a/1b/3a/3b,
or you might get 2a/1a/2b/3a/3b/1b, or worse.
Having asynchronous processes writing to the same file is a bad idea.
You should probably do something like
for no in {1..3}
do
task"$no" > file"$no" &
done
wait
cat file1 file2 file3 > combined_result
Other notes:
$(command) does the same thing
as `command`.
You should stick with the $(command) form.
It doesn't make sense to say x=`echo $x$num`
or x=$(echo $x$num).
Just say x="$x$num".
You should always quote shell variables
unless you have a good reason not to,
and you're sure you know what you're doing.
So don't do appendnum $no; do appendnum "$no", etc. |
I am trying to understand parallel processing in shell scripting and sequentially appending values deterministically (no random order) in the output through a simple example. Below is the code snippet:
x=""
appendnum() {
num=$1; x=`echo $x$num`
}
for no in {0..10}
do
appendnum $no &
done
wait $(jobs -rp)
echo $x
The expected output is 012345678910, but it's resulting in a null value. I even tried it with iterating the PID to wait until it completes, but was unsuccessful. I want the main thread to wait till every parallel process completes. Appending number was just an example.
My problem statement looks like this:
considering I have 3 tasks, I want a list of responses like [responseof(task1),responseof(task2),responseof(task3)]. The count of tasks can be up to 50. Irrespective of the number of tasks, my response time should be the same. What is the most efficient and correct way of doing this?
| Shell parallel processing: appending values |
Adjust -jXXX% as needed:
PARALLEL=-j200%
export PARALLEL

arin() {
#to get network id from arin.net
i="$@"
xidel http://whois.arin.net/rest/ip/$i -e "//table/tbody/tr[3]/td[2] " |
sed 's/\/[0-9]\{1,2\}/\n/g'
}
export -f arin

iptrac() {
# to get other information from ip-tracker.org
j="$@"
xidel http://www.ip-tracker.org/locator/ip-lookup.php?ip=$j -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[2]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[3]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[4]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[5]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[6]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[7]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[8]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[9]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[10]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[11]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[12]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[13]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[14]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[15]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[16]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[17]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[18]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[19]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[20]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[21]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[22]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[23]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[24]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[25]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[26]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[27]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[28]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[29]"
}
export -f iptrac

egrep -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}" test-data.csv | sort | uniq |
parallel arin |
sort | uniq | egrep -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}" |
parallel iptrac > abcd |
I have a test file that looks like this
5002 2014-11-24 12:59:37.112 2014-11-24 12:59:37.112 0.000 UDP ...... 23.234.22.106 48104 101 0 0 8.8.8.8 53 68.0 1.0 1 0.0 0 68 0 48Each line contains a source ip and destination ip. Here, source ip is 23.234.22.106 and destination ip is 8.8.8.8. I am doing ip lookup for each ip address and then scraping the webpage using xidel. Here is the script.
egrep -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}" test-data.csv | sort | uniq | while read i #to get network id from arin.net
do
xidel http://whois.arin.net/rest/ip/$i -e "//table/tbody/tr[3]/td[2] " | sed 's/\/[0-9]\{1,2\}/\n/g'
done | sort | uniq | egrep -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}" |
while read j ############## to get other information from ip-tracker.org
do
xidel http://www.ip-tracker.org/locator/ip-lookup.php?ip=$j -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[2]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[3]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[4]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[5]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[6]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[7]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[8]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[9]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[10]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[11]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[12]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[13]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[14]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[15]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[16]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[17]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[18]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[19]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[20]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[21]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[22]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[23]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[24]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[25]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[26]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[27]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[28]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[29]"
done > abcd
The first xidel is used to scrape arin and the second xidel is used to scrape this
The output of the first xidel is a network id. The IP lookup is done based on the network id. The output of the second xidel is like this
IP Address: 8.8.8.0
[IP Blacklist Check]
Reverse DNS:** server can't find 0.8.8.8.in-addr.arpa: SERVFAIL
Hostname: 8.8.8.0
IP Lookup Location For IP Address: 8.8.8.0
Continent:North America (NA)
Country: United States (US)
Capital:Washington
State:California
City Location:Mountain View
Postal:94040
Area:650
Metro:807
ISP:Level 3 Communications
Organization:Level 3 Communications
AS Number:AS15169 Google Inc.
Time Zone: America/Los_Angeles
Local Time:10:51:40
Timezone GMT offset:-25200
Sunrise / Sunset:06:26 / 19:48
Extra IP Lookup Finder Info for IP Address: 8.8.8.0
Continent Lat/Lon: 46.07305 / -100.546
Country Lat/Lon: 38 / -98
City Lat/Lon: (37.3845) / (-122.0881)
IP Language: English
IP Address Speed:Dialup Internet Speed
[
Check Internet Speed]
IP Currency:United States dollar($) (USD)
IDD Code:+1
As of now, it takes 6 hours to complete this task when there are 1.5 million lines in my test file. This is because the script is running serially.
Is there any way I can divide this task so that the script runs in parallel and the time is reduced significantly. Any help with this would be appreciated.
P.S: I am using a VM with 1 processor and 10 GB RAM
| Multi processing / Multi threading in BASH |
for length in "$(ls $OUT_SAMPLE)"should be rewritten
for length in $(ls $OUT_SAMPLE)In fact you are looping on a single value.
You can verify the values you're looping on with:
for length in "$(ls $OUT_SAMPLE)" ; do
echo x$length
done
Try the same without the double quotes!
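As a side note beyond the answer's point: looping over a glob avoids parsing ls output entirely. A sketch, assuming the files sit directly in $OUT_SAMPLE:

for file in "$OUT_SAMPLE"/*; do
    "$CODES"/transform_into_line2.rb -c "$OUT_SAMPLE" -p 0 -f "$(basename "$file")" &
done
wait    # block until every background job has finished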
|
I have a following line:
for length in "$(ls $OUT_SAMPLE)"
do $CODES/transform_into_line2.rb -c $OUT_SAMPLE -p 0 -f $length &
done
So, it should parallelize the for loop but somehow it still runs in sequence.
However, if I do following:
$CODES/transform_into_line2.rb -c $OUT_SAMPLE -p 0 -f blabla.txt & $CODES/transform_into_line2.rb -c $OUT_SAMPLE -p 0 -f blabla2.txt
It does run it in parallel. Why doesn't a for loop work?
| parallelize shell script |
Having set up a few HPC clusters in my time, I can tell you that what you want to do is going to cause you an enormous amount of trouble with compatibility issues between nodes in the cluster - which is probably why you can't find a direct answer via google.
These compatibility issues include differences in versions of software, system libraries, numerical and computation libraries, C & Fortran etc compilers (and their libraries), PATH & LD_LIBRARY_PATH etc variables, differences between GNU and non-GNU versions of shell utilities, possibly CUDA vs OPENCL (or versions of same) for GPGPU computation, and much much more.
You would run into many of these issues just using two different distros of linux (or even different versions of the same distro on different nodes of the cluster).
You may find that it's simply easier to set up two clusters - one with a single node (the Xeon running linux), and one with several nodes (old Macs running OS X Lion)
However, if that's not an option, the most important thing to consider is the scheduler, not the linux distribution.
I personally wouldn't want to set up what you want, but if I had to, I wouldn't consider using PBS or Torque, I'd use Slurm. Slurm has a lot more fine-grained control over what applications can be run on which nodes. Oracle's Grid Engine is another option that may do what you want, but I'm not familiar enough with it to do more than mention the fact that it exists.
|
I want to build a distributed computing system to run Matlab, C and other programming languages for scientific computing. Now I've several old Mac machines with Lion Mac OS installed acted as web servers or personal computers. I also have one latest 16-Xeon-core machine to be installed with Linux. I've not decided which Linux distribution should we use for the new machine, but we need to consider the following factors. Please help me to decide which Linux distribution I can use, what distributing computing software I can use and how to manage the data backups and queue assignments. All machines with either Mac or Linux OS's can be served as a cluster system for parallel or distributed computing. To be specific, we want to run programs crossing machines in a queue with multiple users and threads. In the case that all machines are not symmetric, but we don't want to lower the speed of the most powerful machine.
The new machine is preferred to be used as a head node, but at least a secondary machine should also be able to act as a head node in case the head node was shut down.
Backup process should be easy to setup, and can be controlled remotely. This is not as important as the first two factors. At least we can backup important data manually. I searched Google already, but I haven't found decent solutions for my case. Thank you in advance for your suggestions!
| Strategies for building distributed computing system with hybrid Mac and Linux systems |
No, cmd1 [newline] cmd2 is not as concurrent as cmd1 & cmd2 — the former waits for cmd1 to complete before starting cmd2.
If you want execution to continue while the first command runs, you need to keep the & when splitting this over two lines:
ab -c 700 -n 100000000000 -r http://crimea-post.ru/ >>crimea-post & ab -c 700 -n 100000000000 -r http://nalog.gov.ru/ >>nalog
becomes
ab -c 700 -n 100000000000 -r http://crimea-post.ru/ >>crimea-post &
ab -c 700 -n 100000000000 -r http://nalog.gov.ru/ >>nalogIf you want to ensure both commands complete before looping again, you can use wait.
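Putting that together with the question's loop, both runs stay concurrent and each iteration finishes both downloads before the next one starts (a sketch assembled from the commands above):

#!/usr/bin/bash
while true
do
    ab -c 700 -n 100000000000 -r http://crimea-post.ru/ >>crimea-post &
    ab -c 700 -n 100000000000 -r http://nalog.gov.ru/ >>nalog &
    wait    # resume the loop only after both ab runs have exited
done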
Other shell constructs also provide concurrency, notably pipes: a | b (or its variant in some shells, a |& b, to redirect both output streams¹) runs a and b concurrently, and waits for both to complete before continuing. Some shells also support coprocesses (see How do you use the command coproc in various shells?) and process substitution (see How do you use the command coproc in various shells?) which can be used to run commands concurrently.
See What are the shell's control and redirection operators? for details of the list terminators (which include &) and pipe operators.
¹ Except in ksh, where it runs a as a coprocess.
|
Does the following script
#!/usr/bin/bash

while true
do
ab -c 700 -n 100000000000 -r http://crimea-post.ru/ >>crimea-post & ab -c 700 -n 100000000000 -r http://nalog.gov.ru/ >>nalog
done
do exactly the same as
#!/usr/bin/bash

while true
do
ab -c 700 -n 100000000000 -r http://crimea-post.ru/ >>crimea-post
ab -c 700 -n 100000000000 -r http://nalog.gov.ru/ >>nalog
done
From my experience the first script creates the nalog file sooner (within seconds) than the second script (after more than 10 minutes), which suggests to me that the latter waits for ab -c 700 -n 100000000000 -r http://nalog.gov.ru/ >>nalog to finish. It should not be the case because, from what I've researched so far, the second script is meant to start ab -c 700 -n 100000000000 -r http://nalog.gov.ru/ >>nalog without waiting for ab -c 700 -n 100000000000 -r http://crimea-post.ru/ >>crimea-post to finish.
I want the two said ab commands to execute concurrently, how might I accomplish this in Bash?
P.S. It is & and not &&. I know what && does and did not want to apply it here.
| Is 'cmd1 [newline] cmd2' as concurrent as 'cmd1 & cmd2' in a Bash script? |
git does file locking to prevent corrupting the repository. You may get messages like
error: cannot lock ref 'refs/remotes/origin/develop': is at 2cfbc5fed0c5d461740708db3f0e21e5a81b87f9 but expected 36c438af7c374e5d131240f9817dabb27d2e0a2c
From github.com:myrepository
! 36c438a..2cfbc5f develop -> origin/develop (unable to update local ref)
error: cannot lock ref 'refs/remotes/origin/master': is at b9a3f6cf9dafc30df38542e5e51ae4842c50814d but expected 5e6174b3c7071c840effeda6c708d6aef36f7c6a
! 5e6174b..b9a3f6c master -> origin/master (unable to update local ref)
from the git processes that fail to get the lock. That is all.
If the two git pull processes are slightly out of sync with each other, the effect will be the same as running the command twice.
|
What happens if two git pull commands are run simultaneously in the same directory?
| Running two git commands in parallel |
At the prompt of an interactive zsh shell:
(repeat $(nproc) { {your-command && kill 0} & }; wait)
Would run that subshell in a new process group, the first instance of your-command to exit successfully triggers a kill 0 which kills that process group.
You can do the same with the parallel from moreutils with:
parallel sh -c 'your-command && kill 0' -- {1..$(nproc)}
({1..5}, is from zsh and supported by many other shells nowadays even bash, but in bash you can't use an expansion in there, you can always replace with $(seq "$(nproc)") there assuming an unmodified $IFS).
Or with GNU xargs:
seq "$(nproc)" | xargs -P0 -n1 sh -c '
your-command && kill 0'
Or explicitly binding each job to each CPU:
seq 0 "$(( $(nproc) - 1))" | xargs -P0 -ICPU taskset -c CPU sh -c '
your-command && kill 0' sh-on-cpuCPU
The key each time is to run it at the prompt of an interactive shell so all the commands are put in a process group of their own.
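Since the question also mentions GNU parallel: its --halt option can stop the whole group as soon as one job succeeds. A sketch, writing to per-job file names to avoid clobbering (that suffix is an assumption, not part of the original answer):

seq "$(nproc)" | parallel -j0 --halt now,success=1 \
    'openssl dhparam -out dhparam4096.{}.pem 4096'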
|
I'm running openssl dhparam -out dhparam4096.pem 4096 and it pegs a single core at 100% for the duration of the task (which can be considerable on some processors). I have 1 or more additional cores that are essentially idling, and I'd like to use them.
I'd like to run $(nproc)× instances of that command. The twist is that I don't need all the instances to complete – just the one that exits with 0 first…the remaining processes should be SIGTERM'd or similar as soon as the 'winner' instance is done, it need not be graceful.
I am learning parallel and xargs to achieve this, there are numerous how-to articles for completing all parallel tasks, but I'm falling short on the search engine fu to achieve the above. I'm not married to the idea of using parallel or xargs, I've got the Parallel alternatives page in my reading list once I've found my way with it.
How can I run $(nproc)× instances of a command where the first instance to exit 0 will kill the other instances?
Other info: Debian 11, aarch64, bash.
| How can I run `$(nproc)`× parallel instances of `openssl dhparam` & when first instance exits with `0`, kill the other instances? |
I find your question hard to understand: you seem to want both parallel and sequential execution.
Do you want this?
for t in "${TARGETS[@]}"; do
(
for a in "${myIPs[@]}"; do
echo "${a} ${t} -p 80" >>log 2>&1 &
echo "${a} ${t} -p 443" >>log 2>&1 &
wait
done
) &
done
each target's for loop is run in a subshell in the background.
|
#!/usr/bin/bash

TARGETS=(
"81.176.235.2"
"81.176.70.2"
"78.41.109.7"
)

myIPs=(
"185.164.100.1"
"185.164.100.2"
"185.164.100.3"
"185.164.100.4"
"185.164.100.5"
)

for t in "${TARGETS[@]}"
do
for a in "${myIPs[@]}"
do
echo "${a} ${t} -p 80" >>log 2>&1 &
echo "${a} ${t} -p 443" >>log 2>&1 &
wait
done
done
I want this code to start the echo commands for each IP in TARGETS, executing them in parallel. At the same time the script is not meant to proceed with echo commands for more than one address in myIPs simultaneously, hence I introduced wait in the internal loop.
I want to have pairs of echo (each for the port 80 and 443) executed in parallel for each target in TARGETS. In other words I want to accomplish this (but sadly it does not work):
for t in "${TARGETS[@]}"
do &
for a in "${myIPs[@]}"
do
echo "${a} ${t} -p 80" >>log 2>&1 &
echo "${a} ${t} -p 443" >>log 2>&1 &
wait
done done
wait
Yet, because it would increase my load averages too much, I do not want this:
for t in "${TARGETS[@]}"
do
for a in "${myIPs[@]}"
do
echo "${a} ${t} -p 80" >>log 2>&1 &
echo "${a} ${t} -p 443" >>log 2>&1 &
done
done
wait
How might I accomplish my objective?
P.S. This is just a snippet of a more complex script. I wanted to isolate the relevant issue, hence the use of echo instead of one of the networking commands.
| How might I execute this nested for loop in parallel? |
One way would be to create the shell input for all the jobs:
for file in *.pdf
do
printf 'pdftoppm -tiff -f 1 -l 2 %q ~/tiff/directory/%q/%q\n' \
"$file" "$file" "$file"
done
and then pipe that to parallel -j N where N is the number of jobs you want to run simultaneously:
for file in *.pdf
do
printf 'pdftoppm -tiff -f 1 -l 2 %q ~/tiff/directory/%q/%q\n' \
"$file" "$file" "$file"
done |
parallel -j 8
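An alternative sketch that skips the intermediate printf loop and lets GNU parallel substitute the file name itself; the mkdir -p is an assumption, since pdftoppm needs the target directory to exist:

parallel -j 8 'mkdir -p ~/tiff/directory/{} && pdftoppm -tiff -f 1 -l 2 {} ~/tiff/directory/{}/{}' ::: *.pdf
|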
I have this command here for batch converting PDF documents (first 2 pages) to TIFF files using pdftoppm.
The goal is to put the TIFF images into its own folder with folder name matching the original PDF file name.
for file in *.pdf; do
pdftoppm -tiff -f 1 -l 2 "$file" ~/tiff/directory/"$file"/"$file"
done
How can I run 8 instances of the pdftoppm command concurrently?
I am running Debian.
I have 10000s of PDFs to convert to TIFF.
| How to run PDF to TIFF conversion in parallel? |
watch bjobs will run and update the output for display every two seconds (by default).
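If the default two-second refresh is too frequent, or you want changes highlighted, watch's standard options cover that; for example:

watch -n 5 bjobs     # refresh every 5 seconds instead of every 2
watch -d bjobs       # highlight what changed between refreshes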
|
When using the LSF command bjobs, I would like to get instantly changing output when I submit another job, because it is stressful to run the same command again and again. I would like something like top refreshing the output of the list of processes.
In top that is not needed; it autorefreshes again and again.
I would like to auto-refresh the output of the bjobs command automatically.
| Live changing bjobs output |
parallel output is sequential because it captures the processes output and prints it only when that process is finished, unlike xargs which let the processes print the output immediately.
From man parallel
GNU parallel makes sure output from the commands is the same output as
you would get had you run the commands sequentially. This makes it
possible to use output from GNU parallel as input for other programs.
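If immediate, possibly interleaved output (as xargs gives) is actually what is wanted, grouping can be switched off; a brief sketch reusing the command from the question:

# -u / --ungroup: print output as soon as it is produced (lines from different jobs may mix)
seq 1 3 | parallel -u kubectl version --short=true --context cs-prod{} --v=6
# --line-buffer: interleave output, but only on whole-line boundaries
seq 1 3 | parallel --line-buffer kubectl version --short=true --context cs-prod{} --v=6
|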
I am trying to download multiple files in parallel in bash and I came across GNU parallel. It looks very simple and straightforward, but I am having a hard time getting GNU parallel working. What am I doing wrong? Any pointers are appreciated. As you can see, the output is very sequential and I expect the output to be different each time. I saw a similar question on SO (GNU parallel not working at all) but the solutions mentioned there did not work for me.
svarkey@svarkey-Precision-5510:~$ seq 1 3 | xargs -I{} -n 1 -P 4 kubectl version --short=true --context cs-prod{} --v=6
I0904 11:33:10.635636 24861 loader.go:375] Config loaded from file: /home/svarkey/.kube/config
I0904 11:33:10.640718 24863 loader.go:375] Config loaded from file: /home/svarkey/.kube/config
I0904 11:33:10.640806 24862 loader.go:375] Config loaded from file: /home/svarkey/.kube/config
I0904 11:33:11.727974 24863 round_trippers.go:443] GET https://kube-api.awsw3.cld.dtvops.net/version?timeout=32s 200 OK in 1086 milliseconds
Client Version: v1.18.7
Server Version: v1.14.6
I0904 11:33:11.741985 24861 round_trippers.go:443] GET https://kube-api.awsw1.cld.dtvops.net/version?timeout=32s 200 OK in 1105 milliseconds
Client Version: v1.18.7
Server Version: v1.14.6
I0904 11:33:11.859882 24862 round_trippers.go:443] GET https://kube-api.awsw2.cld.dtvops.net/version?timeout=32s 200 OK in 1218 milliseconds
Client Version: v1.18.7
Server Version: v1.14.6
svarkey@svarkey-Precision-5510:~$ seq 1 3 | parallel -j 4 -I{} kubectl version --short=true --context cs-prod{} --v=6
Client Version: v1.18.7
Server Version: v1.14.6
I0904 11:33:18.584076 24923 loader.go:375] Config loaded from file: /home/svarkey/.kube/config
I0904 11:33:19.662197 24923 round_trippers.go:443] GET https://kube-api.awsw1.cld.dtvops.net/version?timeout=32s 200 OK in 1077 milliseconds
Client Version: v1.18.7
Server Version: v1.14.6
I0904 11:33:18.591033 24928 loader.go:375] Config loaded from file: /home/svarkey/.kube/config
I0904 11:33:19.691343 24928 round_trippers.go:443] GET https://kube-api.awsw3.cld.dtvops.net/version?timeout=32s 200 OK in 1099 milliseconds
Client Version: v1.18.7
Server Version: v1.14.6
I0904 11:33:18.591033 24924 loader.go:375] Config loaded from file: /home/svarkey/.kube/config
I0904 11:33:19.775152 24924 round_trippers.go:443] GET https://kube-api.awsw2.cld.dtvops.net/version?timeout=32s 200 OK in 1183 milliseconds
svarkey@svarkey-Precision-5510:/tmp/parallel-20200822$ parallel --version
GNU parallel 20200822
Copyright (C) 2007-2020 Ole Tange, http://ole.tange.dk and Free Software
Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
GNU parallel comes with no warranty.Web site: https://www.gnu.org/software/parallel | Bash parallel command is executing commands sequentially |
For part of the answer you want this, correct?
$ parallel --link -k echo {1} {2} ::: {0..3} ::: {100..450..50}
0 100
1 150
2 200
3 250
0 300
1 350
2 400
3 450If so, one way to do what I think you want would be
$ parallel -k echo {1} {2} ::: {10..20..10} ::: "$(parallel --link -k echo {1} {2} ::: {0..3} ::: {100..450..50})"
10 0 100
10 1 150
10 2 200
10 3 250
10 0 300
10 1 350
10 2 400
10 3 450
20 0 100
20 1 150
20 2 200
20 3 250
20 0 300
20 1 350
20 2 400
20 3 450Another way would be (with a sort thrown in to show it in the order you want; it wouldn't be necessary in the actual run):
$ parallel --link -k echo {1} {2} ::: {0..3} ::: {100..450..50} | parallel -a- echo {2} {1} ::: {10..20..10} | sort -k 1,1 -k3,3 -k2,2
10 0 100
10 1 150
10 2 200
10 3 250
10 0 300
10 1 350
10 2 400
10 3 450
20 0 100
20 1 150
20 2 200
20 3 250
20 0 300
20 1 350
20 2 400
20 3 450Yet another way would be to have parallel invoke parallel:
$ parallel parallel --link --arg-sep ,,, echo {1} ,,, {0..3} ,,, {100..450..50} ::: {10..20..10}
10 0 100
10 1 150
10 2 200
10 3 250
10 0 300
10 1 350
10 2 400
10 3 450
20 0 100
20 1 150
20 2 200
20 3 250
20 0 300
20 1 350
20 2 400
20 3 450This works because the “inner” parallel uses commas instead of colons for argument separators, so the “outer”parallel doesn't “see” the linked argument.
While I was working on a way to make that more understandable (there's an assumed ‘{}’ in there) I realized that that last example wouldn't exactly work for you, because the 2nd and 3rd arguments are one string. So I added the clarification, and (yet another!) parallel, to demonstrate how you'd run your Python simulator.
$ parallel parallel --link --arg-sep ,,, -I [] echo {1} [] ,,, {0..3} ,,, {100..450..50} ::: {10..20..10} | parallel -C' ' echo foo {1} bar {2} blat {3}
foo 10 bar 0 blat 100
foo 10 bar 1 blat 150
foo 10 bar 2 blat 200
foo 10 bar 3 blat 250
foo 10 bar 1 blat 350
foo 10 bar 0 blat 300
foo 10 bar 2 blat 400
foo 10 bar 3 blat 450
foo 20 bar 0 blat 100
foo 20 bar 1 blat 150
foo 20 bar 2 blat 200
foo 20 bar 3 blat 250
foo 20 bar 0 blat 300
foo 20 bar 1 blat 350
foo 20 bar 2 blat 400
foo 20 bar 3 blat 450For any enumerated list of values
$ parallel parallel --link --arg-sep ,,, -I [] echo {1} [] ,,, {0..3} ,,, v0.0 v0.1 v0.2 v0.3 v1.0 v1.1 v1.2 v1.3 ::: {10..20..10} | parallel -C' ' echo power {1} seed {2} num {3}
power 20 seed 0 num v0.0
power 20 seed 1 num v0.1
power 20 seed 2 num v0.2
power 20 seed 3 num v0.3
power 20 seed 0 num v1.0
power 20 seed 1 num v1.1
power 20 seed 2 num v1.2
power 20 seed 3 num v1.3
power 10 seed 0 num v0.0
power 10 seed 1 num v0.1
power 10 seed 2 num v0.2
power 10 seed 3 num v0.3
power 10 seed 0 num v1.0
power 10 seed 1 num v1.1
power 10 seed 2 num v1.2
power 10 seed 3 num v1.3This is getting to be a very long answer. I think maybe you want something more like this, where 1 through 12 (number of powers times number of seeds) are the unique values for each combination of power and seed, and could be an enumerated list of values rather than {1..12}? Note I'm linking power and seed rather than num and seed.
$ parallel --link echo {1} {2} ::: "$(parallel echo {1} {2} ::: {10..30..10} ::: {0..3})" ::: {1..12} | parallel -C' ' echo run-sim --power {1} --seed {2} --num {3}
run-sim --power 10 --seed 0 --num 1
run-sim --power 10 --seed 1 --num 2
run-sim --power 10 --seed 2 --num 3
run-sim --power 10 --seed 3 --num 4
run-sim --power 20 --seed 0 --num 5
run-sim --power 20 --seed 1 --num 6
run-sim --power 20 --seed 2 --num 7
run-sim --power 20 --seed 3 --num 8
run-sim --power 30 --seed 0 --num 9
run-sim --power 30 --seed 1 --num 10
run-sim --power 30 --seed 2 --num 11
run-sim --power 30 --seed 3 --num 12 |
I made a Python simulator that runs on the basis of user-provided arguments. To use the program, I run multiple random simulations (controlled with a seed value). I use GNU parallel to run the simulator with arguments in a similar manner as shown below:
parallel 'run-sim --seed {1} --power {2}' ::: <seed args> ::: <power args>
Now, there is a third argument --num that I want to use, but want to link that argument with the seed value. So that, for every seed value only one num value is used. However, the same num argument should not be used with every power value.
In a nutshell, this table should make you understand better:
| Power | Seed | num |
|:-----------|------------:|:------------:|
| 10 | 0 | 100 |
| 10 | 1 | 150 |
| 10 | 2 | 200 |
| 10 | 3 | 250 |
|:-----------|------------:|:------------:|
| 20 | 0 | 300 |
| 20 | 1 | 350 |
| 20 | 2 | 400 |
| 20 | 3 | 450 |
....(The table format may not be suitable for mobile devices)
If I were to write the above implementation using a for loop, I would do something like:
for p in power:
for s, n in (seed, num[p])
simulate(p, s, n)Where power is a 1D array, seed is a 1D array and num is a 2D array where a row depicts the corresponding num values for a power p.
My Solution:
Use multiple parallel statements for each power value, and use the --link parameter of parallel to bind the seed and num arguments.
parallel --link 'run-sim --seed {1} --num {2} --power 10' ::: 0 1 2 3 ::: 100 150 200 250
parallel --link 'run-sim --seed {1} --num {2} --power 20' ::: 0 1 2 3 ::: 300 350 400 450
...The problem with this solution would be that I would have to limit the number of jobs for each statement based upon the number of power values. My computer can handle 50 extra processes before going into cardiac arrest, therefore for 3 power values, I would have to limit the jobs for each statement to 12.
What I am looking for
A single liner such that I don't have to run multiple parallel statements and fix the number of jobs to 50.
| GNU Parallel linking arguments with alternating arguments |
With GNU Parallel you should be able to do this:
parallel --tty -j0 ::: 'openocd -f connect_swo.cfg' 'python3 swo_parser.py'
If GNU Parallel is not already installed look at: https://oletange.wordpress.com/2018/03/28/excuses-for-not-installing-gnu-parallel/
|
What I would like to achieve is a bash script, or even better a single bash line, that can run two terminal-based apps in parallel. I am aware of the commands & and ; but in my case they are not applicable because both my commands keep the terminal open and need each other to run properly.
It might seem like an edge case, but my specific use case is quite simple and I think it might be helpful in many similar cases.
What I am trying to do is to parse a message from a USB port that uses the SWO protocol, so my quite obnoxious workaround is:
Open terminal one, run openocd -f connect_swo.cfg (terminal 1 hangs)
Open terminal two, run python3 swo_parser.py (terminal 2 hangs and terminal 1 prints values)
Then terminate both commands with two separate ctrl+c signals
Expected solution would be something like:
Run a magic command that opens two connected sessions and both my commands on the separate sessions
One single ctrl +c terminates both commandsPS:Comment me if I should move the question to superuser
| Terminal command follows the lifetime of another terminal command |
parallel echo HaploSample.chr{1}{2}{3}.raw.vcf ::: 1 2 3 4 5 6 7 ::: A B D ::: _part1 _part2
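A sketch of the same idea applied to the full gatk invocation from the question (adding --dry-run after parallel would only print the generated commands, which is handy for checking them first):

parallel gatk HaplotypeCaller \
    -R /storage/ppl/wentao/GATK_R_index/genome.fa \
    -I GATK/MarkDuplicates/ApproachBsortedstettler.bam \
    -L chr{1}{2}{3} \
    -O GATK/HaplotypeCaller/HaploSample.chr{1}{2}{3}.raw.vcf \
    ::: 1 2 3 4 5 6 7 ::: A B D ::: _part1 _part2
|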
Currently I have the following script for using the HaploTypeCaller program on my Unix system on a repeatable environment I created:
#!/bin/bash
#parallel call SNPs with chromosomes by GATK
for i in 1 2 3 4 5 6 7
do
for o in A B D
do
for u in _part1 _part2
do
(gatk HaplotypeCaller \
-R /storage/ppl/wentao/GATK_R_index/genome.fa \
-I GATK/MarkDuplicates/ApproachBsortedstettler.bam \
-L chr$i$o$u \
-O GATK/HaplotypeCaller/HaploSample.chr$i$o$u.raw.vcf &)
done
done
done

gatk HaplotypeCaller \
-R /storage/ppl/wentao/GATK_R_index/genome.fa \
-I GATK/MarkDuplicates/ApproachBsortedstettler.bam \
-L chrUn \
-O GATK/HaplotypeCaller/HaploSample.chrUn.raw.vcf &
How can I change this piece of code to parallel at least partially?
Is it worth doing? I am trying to incorporate this whole script into a different script that you can see in a different question here.
Should I?
Will I get quite the boost on performance?
| Converting for loops on script that is called by another script into GNU parallel commands |
You can first put all parameters in a file and then use
parallel -a filename command
For example:
echo "--fullscreen $(find /tmp -name *MAY*.pdf) $(find /tmp -name *MAY*.pdf).out" >> /tmp/a
echo "--page-label=3 $(find /tmp -name *MAY*.pdf) $(find /tmp -name *JUNE*.pdf).out" >> /tmp/a
echo "--fullscreen $(find /tmp -name *MAY*.pdf) $(find /tmp -name *JULY*.pdf).out" >> /tmp/a Then run the command:
parallel -a /tmp/a evince |
I want to run a task where I specify two commands which will be run in an alternating fashion with different parameters. E.g:
1. exec --foo $inputfile1 $inputfile.outfile1
2. exec --bar $inputfile2 $inputfile.outfile2
3. exec --foo $inputfile3 $inputfile.outfile3
4. exec --bar $inputfile4 $inputfile.outfile4
I could probably get away with specifying two parallel commands or with specifying two inputs, but I need something more universal. The files will be specified using a pipelined "find" command.
EDIT:
My command for one action would look like this:
find . -name 'somefiles*' -print0 | parallel -0 -j10 --verbose 'exec --foo {} {.}.outfile'
I just do not know how to do this in an alternating fashion between two commands.
So basically what I need parallel -j10 to do is to run 5 of these commands with the foo parameter and 5 of them with the bar parameter on a single set of files. I could probably get away with it not being alternating, but I want parallel to take care of it being exactly a 5/5 split so I don't end up with more foos or more bars.
| GNU Parallel alternating jobs |