output | input | instruction
---|---|---
There's probably a better way to do this, but I've come up with this solution, which converts the number to decimal and then back to hex (and manually adds the 0x):
printf '0x%x\n' "$((16#00080000))"Which you could write as:
printf '0x%x\n' "$((16#$(expr substr "$SUPERBLOCK" 64 8)))" |
I have the bash line:
expr substr $SUPERBLOCK 64 8
which returns the string:
00080000
I know that this is actually 0x00080000 in little-endian. Is there a way to create an integer variable from it in bash in big-endian, like 0x80000?
| How to read string as hex number in bash? |
If that list was in a file, one per line, I'd do something like:
sort -nu file |
awk 'NR == FNR {rank[$0] = NR; next}
{print rank[$0]}' - file
If it was in a zsh $array:
sorted=(${(nou)array})
for i ($array) echo $sorted[(i)$i]
That's the same principle as for the awk version above: the rank is the index NR/(i) in the numerically (-n/(n)) ordered (sort/(o)), uniqued (-u/(u)) list of elements.
For your average rank:
sort -n file |
awk 'NR == FNR {rank[$0] += NR; n[$0]++; next}
{print rank[$0] / n[$0]}' - file
Which gives:
5
7
1
6
2.5
2.5
4
(use sort -rn to reverse the order like in your Google Spreadsheet version).
|
I am wondering whether there is a name for such a simple function that returns the rank order of numbers in an array.
I would really love to do this ranking in a minimalist way with basic Unix commands, but I cannot think of anything other than a basic find-and-loop approach, which is not very elegant.
Assume you have an array of numbers
17
94
3
52
4
4
9
Expected output, where duplicates just receive the same ID; how to handle duplicates is not critical, so feel free to take shortcuts:
4
6
1
5
2
2
3
Motivation: Today I saw many users solving this problem in many different ways, with a lot of manual spreadsheet steps, so I started to think about the most minimalist way to do it.
Comparing the ranking algorithm to Google's Average ranking
In Google Spreadsheets, do =arrayformula(rank.AVG(A:A,A:A,true)) and you get, as a benchmark, an ascending-order ranking like the first expected output:
17 5
94 7
3 1
52 6
4 2.5
4 2.5
9 4
where you see that my initial ranking algorithm is biased.
I think being able to set the dataset location would be helpful here.
| How to rank numbers in array by Unix? |
The conclusion of the various comments seems to be that the simplest answer to the original question is:
if ! (( $COUNTER % 5 )) ; then
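As a quick illustration of the test (a hedged sketch; the loop values and the message are just for demonstration):
$ for COUNTER in 1 5 12 15 20; do if ! (( COUNTER % 5 )); then echo "$COUNTER is a multiple of 5"; fi; done
5 is a multiple of 5
15 is a multiple of 5
20 is a multiple of 5
|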
Here is my code; I want to compare $COUNTER against various multiples.
if [ "$COUNTER" = "5" ]; thenIt's okay, but I want it do it for dynamic times like 5,10,15,20 etc.
| Compare bash variable to see if divisible by 5 |
2.3e-12 would be understood as 2 in a locale where the decimal radix character is , (as it is in most of the non-English speaking world including your de_DE.utf8) where the number would need to be written 2,3e-12.
You could do:
LC_ALL=C sort -grk2 < your-file
to force the numbers to be interpreted in the English style.
In the C locale (the only one you would be guaranteed to find on any system), the decimal radix is . (conveniently for your input).
Note that sort has nothing to do with bash, it's a separate command. The -g option is a non-standard extension of the GNU implementation of sort.
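A quick hedged check with the sample from the question (assuming it is saved as test.txt and that the German numeric locale is what breaks the sort):
$ LC_ALL=C sort -grk2 < test.txt
6.7 2.3e-12
2.4 2.5e-15
4.2 2.1e-15
4.5 5.6e-16
4.0 2.9e-17
1.0 1.0e-17
5.0 3.4e-18
0.5 1.0e-18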
|
I am trying to sort a data file in descending order. The data file has three columns delimited by tabs; I want to order the rows in descending order of the third column (which is given in scientific notation):
cat eII_surf.txt | sort -gr -k3
Somehow, this worked on a previous machine, but my new one does not seem to do the trick at all.
Here a simple example:
cat test.txt:
6.7 2.3e-12
5.0 3.4e-18
4.5 5.6e-16
4.2 2.1e-15
4.0 2.9e-17
2.4 2.5e-15
1.0 1.0e-17
0.5 1.0e-18
and cat test.txt | sort -gr -k2:
4.5 5.6e-16
5.0 3.4e-18
6.7 2.3e-12
4.2 2.1e-15
4.0 2.9e-17
2.4 2.5e-15
1.0 1.0e-17
0.5 1.0e-18
This is the output of locale:
LANG=en_US.utf8
LC_CTYPE="en_US.utf8"
LC_NUMERIC=de_DE.utf8
LC_TIME=de_DE.utf8
LC_COLLATE="en_US.utf8"
LC_MONETARY=de_DE.utf8
LC_MESSAGES="en_US.utf8"
LC_PAPER=de_DE.utf8
LC_NAME="en_US.utf8"
LC_ADDRESS="en_US.utf8"
LC_TELEPHONE="en_US.utf8"
LC_MEASUREMENT=de_DE.utf8
LC_IDENTIFICATION="en_US.utf8"
LC_ALL= | "sort -g" does not work as expected on data in scientific notation |
Here’s a generic variant:
BEGIN { OFS = FS = "," }{
for (i = 1; i <= NF; i++) sum[i] += $i
count++
}count % 3 == 0 {
for (i = 1; i <= NF; i++) $i = sum[i] / count
delete sum
count = 0
if ($NF >= 1.1 * last || $NF <= 0.9 * last) {
print
last = $NF
}
}END {
if (count > 0) {
for (i = 1; i <= NF; i++) $i = sum[i] / count
if ($NF >= 1.1 * last || $NF <= 0.9 * last) print
}
}I’m assuming that left-overs should be handled in a similar fashion to blocks of N lines.
|
I've reviewed the "Similar questions", and none seem to solve my problem:
I have a large CSV input file; each line in the file is an x,y data point. Here are a few lines for illustration, but please note that in general the data are not monotonic:
1.904E-10,2.1501E+00
3.904E-10,2.1827E+00
5.904E-10,2.1106E+00
7.904E-10,2.2311E+00
9.904E-10,2.2569E+00
1.1904E-09,2.3006E+00
I need to create an output file that is smaller than the input file. The output file will contain no more than one line for every N lines in the input file. Each single line in the output file will be an x,y data point which is the average of the x,y values for N lines of the input file.
For example, if the total number of lines in the input file is 3,000, and N=3, the output file will contain no more than 1,000 lines. Using the data above to complete this example, the first 3 lines of data above would be replaced with a single line as follows:
x = (1.904E-10 + 3.904E-10 + 5.904E-10) / 3 = 3.904E-10
y = (2.1501E+00 + 2.1827E+00 + 2.1106E+00) / 3 = 2.1478E+00, or:
3.904E-10,2.1478E+00 for one line of the output file.
I've fiddled with this for a while, but haven't gotten it right. This is what I've been working with, but I can't see how to iterate the NR value to work through the entire file:
awk -F ',' 'NR == 1, NR == 3 {sumx += $1; avgx = sumx / 3; sumy += $2; avgy = sumy / 3} END {print avgx, avgy}' CB07-Small.csv
To complicate this a bit more, I need to "thin" my output file still further:
If the value of avgy (as calculated above) is close to the last value of avgy in the output file, I will not add this as a new data point to the output file. Instead I will calculate the next avgx & avgy values from the next N lines of the input file. "Close" should be defined as a percentage of the last value of avgy. For example: if the current calculated value of avgy differs by less than 10% from the last value of avgy recorded in the output file, then do not write a new value to the output file.
| Can `awk` sum a column over a specified number of lines |
If you want to convert only a specific column, awk helps:
$ cat ip.txt
foo 64651235465131648624672951975 123
bar 3452356235235235 xyz
baz 234325236452352352345234532 ijkls
$ # change only second column
$ awk '{$2 = sprintf("%.3e", $2)} 1' ip.txt
foo 6.465e+28 123
bar 3.452e+15 xyz
baz 2.343e+26 ijkls
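If every number in the file should be converted (which is what the question asks for), a hedged variant along the same lines that rewrites all numeric-looking fields rather than one column; the %.3e format and the $i+0==$i test are assumptions to adapt, and note it also rewrites small numbers such as 123:
$ awk '{ for (i = 1; i <= NF; i++) if ($i + 0 == $i) $i = sprintf("%.3e", $i) } 1' ip.txt
foo 6.465e+28 1.230e+02
bar 3.452e+15 xyz
baz 2.343e+26 ijkls
|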
I am working on a file where I have columns of very large values.
(like 40 digits : 646512354651316486246729519751795724672467596754627.06843
and so on ...)
I would like to have those numbers in scientific notation but with only 3 or 4 numbers after the dot. Is there a way to use sed or something to do that on every number that appears in my file?
| How to convert big numbers into scientific notation |
Pass the price through tonumber:
curl -sS 'https://api.binance.com/api/v1/ticker/price?symbol=BTCUSDT' |
jq -r '.price | tonumber'
This would convert the price from a string to a number, removing the trailing zeros. See the manual for jq.
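A quick offline check of the same filter (the sample JSON object is made up here so the API does not have to be called):
$ echo '{"price":"7222.25000000"}' | jq -r '.price | tonumber'
7222.25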
|
The following command achieves my goal of grabbing the BTC price from a specific exchange.
curl -sS https://api.binance.com/api/v1/ticker/price?symbol=BTCUSDT | jq -r '.price'
The output at the moment is 7222.25000000, but I would like to get 7222.25.
| Trim trailing zeroes off a number extracted by jq |
Update: as of zsh v. 5.1, the printf builtin supports grouping of thousands via ' just like bash/coreutils printf (see also the discussion here).
The thousands separator is a GNU extension that zsh doesn't support, and it has its own printf builtin that you end up using instead. As mentioned in the linked post, you can get the locale-dependent thousands separator with:
zmodload zsh/langinfo
echo $langinfo[THOUSEP]
If you need to use zsh specifically and exclusively, you can use that with sed.
It will probably be easier to use the non-builtin printf from GNU coreutils instead, which will permit the thousands separator option if your system supports it:
$ command printf "%'d\n" 1234567890
1,234,567,890
command printf tells the shell not to use a builtin or alias, but to look up the command in $PATH.
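For completeness, a hedged sketch of the zsh-only route mentioned above, grouping the digits with sed and the locale's own separator (the sed expression is an assumption, not part of the original answer; output shown for an en_US locale):
zmodload zsh/langinfo
print 1234567890 | sed -e :a -e "s/\(.*[0-9]\)\([0-9]\{3\}\)/\1$langinfo[THOUSEP]\2/;ta"
1,234,567,890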
|
I have been trying to define a thousands separator in printf for a while now and I have discovered that zsh has some problems with it.
In bash I can use something like:
$ printf "%'d\n" 1234567890
1,234,567,890but in zsh it won't work :
$ printf "%'d\n" 1234567890
printf: %': invalid directive
I have just found out that coreutils printf will do it just fine:
$ /usr/bin/printf "%'d\n" 1234567890
1,234,567,890
How can I use a thousands separator in zsh?
$ zsh --version
zsh 5.0.2 (x86_64-pc-linux-gnu) | Thousands separator in printf in zsh |
A sed solution. Maybe it's too tricky and suboptimal, but it works. As an experiment :).
It does all replacements in one sed call by executing one big command sequence generated with printf and paste. I wanted to split this command over multiple lines for readability, but couldn't - it stops working then. So - the one-liner:
sed -i -r "$(paste -d'/' <(printf 's/%s\\b\n' G{1..229}) <(printf '%s/g\n' G{230..458}))" file.txtIt is converting to the following sed command:
sed -i -r "s/G1\b/G230/g
s/G2\b/G231/g
s/G3\b/G232/g
s/G4\b/G233/g
...
s/G227\b/G456/g
s/G228\b/G457/g
s/G229\b/G458/g" file.txtExplanationsed -i -r "$(
paste -d'/' - joins the left and right parts (which are generated in steps 3 and 4) with a slash - / and the result is this: s/G1\b/G230/g
<(printf 's/%s\\b\n' G{1..229}) - makes the left parts of the sed substitute command. Example: s/G1\b, s/G2\b, s/G3\b, and so on.
\b - matches a word boundary; that is, it matches if the character to the left is a “word” character and the character to the right is a “non-word” character, or vice-versa. Information - GNU sed, regular expression extensions.
<(printf '%s/g\n' G{230..458}) - makes the right parts of the sed substitute command. Example: G230/g, G231/g, G232/g, and so on.
)" file.txt - input file.Testing
Input
var G1 = value;
G3 = G1 + G2;
G3 = G1 + G2
G3 = ${G1} + G2
var G2 = value;
var G3 = value;
G224 = G3 + G215;
G124 = G124 + G215;
G124 = G124 + G12;
var G4 = value;
var G5 = value;
var G6 = value;
var G59 = value;
var G60 = value;
var G156 = value;
var G227 = value;
var G228 = value;
var G229 = value;
Output
var G230 = value;
G232 = G230 + G231;
G232 = G230 + G231
G232 = ${G230} + G231
var G231 = value;
var G232 = value;
G453 = G232 + G444;
G353 = G353 + G444;
G353 = G353 + G241;
var G233 = value;
var G234 = value;
var G235 = value;
var G288 = value;
var G289 = value;
var G385 = value;
var G456 = value;
var G457 = value;
var G458 = value; |
I have a file which has variables from G1 to G229. I want to replace them with G230 to G469; how can I do that? I tried this bash script, but it didn't work:
#!/bin/bash
for num in {1..229}
do
echo G$num
N=$(($num+229))
echo G$N
sed -i -e 's/$G$num/$G$N/g' file
done | how to replace a value with that value + constant |
In bash printf is able to use the %f format
#!/bin/bash
for a in 531.125 531.4561 531.3518 531.2; do
printf "%8.4f\n" "$a"
done
Executed, it gives:
531.1250
531.4561
531.3518
531.2000 |
I have a command that outputs a number to a log file, and I don't like the way it looks when the number changes the number of decimal places because it ruins the alignment and makes everything look messy. How do I force the output to have the same number of decimal places each time?
ex:
531.125
531.4561
531.3518
531.2
should be:
531.1250
531.4561
531.3518
531.2000
Thanks!
| Formatting numerical output in bash to have exactly 4 decimal places |
I ended up with this:
awk -v col=$col '
typeof($col) != "strnum" {
print "Error on line " NR ": " $col " is not numeric"
noprint=1
exit 1
}
{
sum+=$col
}
END {
if(!noprint)
print sum
}' $file
This uses typeof, which is a GNU awk extension. typeof($col) returns 'strnum' if $col is a valid number, and 'string' or 'unassigned' if it is not.
See Can I determine type of an awk variable?
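A condensed hedged check of the same idea against the sample input from the question (requires GNU awk 4.2 or later for typeof; the file name is an assumption):
$ printf 'bob 1\ndave 2\nalice 3.5\nfoo bar\n' > input-file
$ gawk -v col=2 'typeof($col) != "strnum" {print "Error on line " NR ": " $col " is not numeric"; noprint=1; exit 1} {sum+=$col} END {if (!noprint) print sum}' input-file
Error on line 4: bar is not numeric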
|
I have a program that sums a column in a file:
awk -v col=2 '{sum+=$col}END{print sum}' input-file
However, it has a problem: if you give it a file that doesn't have numeric data (or if one number is missing), it will interpret it as zero.
I want it to produce an error if one of the fields cannot be parsed as a number.
Here's an example input:
bob 1
dave 2
alice 3.5
foo bar
I want it to produce an error because 'bar' is not a number, rather than ignoring the error.
| Make awk produce error on non-numeric |
You can use paste to serialize the numbers in a format suitable for bc to do the addition:
% grep "30201" logfile.txt | cut -f6 -d "|"
650
1389
945% grep "30201" logfile.txt | cut -f6 -d "|" | paste -sd+
650+1389+945
% grep "30201" logfile.txt | cut -f6 -d "|" | paste -sd+ | bc
2984
If you have grep with PCRE, you can do it with grep alone using positive lookbehind:
% grep -Po '\|30201\|.*\|\K\d+' logfile.txt | cut -f6 -d "|" | paste -sd+ | bc
2984With awk alone:
% awk -F'|' '$3 == 30201 {sum+=$NF}; END{print sum}' logfile.txt
2984
-F'|' sets the field separator as |
$3 == 30201 {sum+=$NF} adds up the last field's values if the third field is 30201
END{print sum} prints the sum at the END |
I have a log file. For every line with a specific number, I want to sum the last number of those lines. Grepping and cutting is no problem, but I don't know how to sum the numbers. I tried some solutions from StackExchange but didn't get them to work in my case.
This is what I have so far:
grep "30201" logfile.txt | cut -f6 -d "|"30201 are the lines I'm looking for.
I want to sum the last numbers 650, 1389 and 945
The logfile.txt
Jan 09 2016|09:15:17|30201|1|SL02|650
Jan 09 2016|09:15:18|43097|1|SL01|945
Jan 09 2016|09:15:19|28774|2|SB03|1389
Jan 09 2016|09:16:21|00788|1|SL02|650
Jan 09 2016|09:17:25|03361|3|SL01|945
Jan 09 2016|09:17:33|08385|1|SL02|650
Jan 09 2016|09:18:43|10234|1|SL01|945
Jan 09 2016|09:21:55|00788|1|SL02|650
Jan 09 2016|09:24:43|03361|3|SB03|1389
Jan 09 2016|09:26:01|30201|1|SB03|1389
Jan 09 2016|09:26:21|28774|2|SL02|650
Jan 09 2016|09:26:25|00788|1|SL02|650
Jan 09 2016|09:27:21|28774|2|SL02|650
Jan 09 2016|09:29:32|30201|1|SL01|945
Jan 09 2016|09:30:12|34032|1|SB03|1389
Jan 09 2016|09:30:15|08767|3|SL02|650 | How to grep and cut numbers from a file and sum them |
You're reaching the limit of the precision of awk numbers.
You could force the comparison to be a string comparison with:
awk -v num1=59558711052462309110012 -v num2=59558711052462309110011 '
BEGIN{ print (num2""==num1) ? "equal" : "not equal" }'(Here the concatenation with the empty string forces them to be considered as strings instead of numbers).
If you want to do numerical comparison, you'll have to use a tool that can work with arbitrary precision numbers like bc or python.
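For instance, a hedged check with GNU bc, which compares the two values from the question at full precision (bc prints 1 for true and 0 for false):
$ echo '59558711052462309110012 == 59558711052462309110011' | bc
0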
|
This example works fine:
awk -v num1=5999 -v num2=5999 'BEGIN{ print (num2==num1) ? "equal" : "not equal" }'
equal
This example does not work well:
awk -v num1=59558711052462309110012 -v num2=59558711052462309110011 'BEGIN{ print (num2==num1) ? "equal" : "not equal" }'
equal
In the second example the compared numbers are different. Why does it not print "not equal"?
| How to compare two numbers in awk? |
You were fairly close.
You see what you were doing wrong, don't you?
You were keeping one total for each column1 value,
when you should have been keeping three.
This is similar to Inian's answer,
but trivially extendable to handle any number of columns:
awk -F"\t" '{for(n=2;n<=NF; ++n) a[$1][n]+=$n}
END {for(i in a) {
printf "%s", i
for (n=2; n<=4; ++n) printf "\t%s", a[i][n]
printf "\n"
}
}'
Rather than keep three arrays, like Inian's answer,
it keeps a two-dimensional array.
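For reference, a hedged run of the same approach as a one-liner; GNU awk 4.0 or later is needed for the a[$1][n] arrays, the file name is an assumption, and for (i in a) does not guarantee the row order:
$ gawk -F"\t" '{for(n=2;n<=NF;++n) a[$1][n]+=$n} END{for(i in a){printf "%s",i; for(n=2;n<=4;++n) printf "\t%s",a[i][n]; printf "\n"}}' tablefile
abc	1	1	1
bcd	14	25	7
cde	20	11	35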
| I have a table data like below
abc 1 1 1
bcd 2 2 4
bcd 12 23 3
cde 3 5 5
cde 3 4 5
cde 14 2 25
I want the sum of the values in each column, grouped by the variable in the first column; the desired result is like below:
abc 1 1 1
bcd 14 25 7
cde 20 11 35I used awk command like this
awk -F"\t" '{for(n=2;n<=NF; ++n)a[$1]+=$n}END{for(i in a ) print i, a[i] }' tablefilepathand I got a result below:
abc 3
bcd 46
cde 66
I think the end of my code is wrong but don't know how to fix it.
I need some directions to fix the code.
| How to get sum of values in column based on variables in other column separately? [duplicate] |
You are intending to sort by column 7 numerically.
This can be done with either
$ sort -n -k 7 file
id - 884209 , researchers - 1
id - 904091 , researchers - 1
id - 905525 , researchers - 1
id - 916197 , researchers - 1
id - 896781 , researchers - 4
id - 908660 , researchers - 5
id - 908876 , researchers - 7
id - 910480 , researchers - 10
id - 901026 , researchers - 15
or with
$ sort -k 7n file
id - 884209 , researchers - 1
id - 904091 , researchers - 1
id - 905525 , researchers - 1
id - 916197 , researchers - 1
id - 896781 , researchers - 4
id - 908660 , researchers - 5
id - 908876 , researchers - 7
id - 910480 , researchers - 10
id - 901026 , researchers - 15
These are equivalent.
The -n option specifies numerical sorting (as opposed to lexicographical sorting). In the second example above, the n is added as a specifier/modifier to the 7th column specifically.
The specification of the sorting key column, -k 7, will make sort sort the lines on column 7 onwards (the line from column 7 to the end). In this case, since column 7 is last, it mean just this column. If this had mattered, you may have wanted to use -k 7,7 instead ("from column 7 to 7").
If two keys compare equal, sort will use the complete line as the sorting key, which is why we got the result we get for the first four lines in your example. If you had wanted to do a secondary sort on the second column, you would have used sort -n -k 7,7 -k 2,2, or sort -k 7,7n -k 2,2n (specifying the type of comparison separately for each column). Again, if the 7th and the 2nd columns compare the same between two lines, sort would have used a lexicographical comparison of the complete lines.
To sort numerically on character position 29, which corresponds to the first digit of the numerical values at the end of each line in your example data:
$ sort -k 1.29n file
id - 884209 , researchers - 1
id - 904091 , researchers - 1
id - 905525 , researchers - 1
id - 916197 , researchers - 1
id - 896781 , researchers - 4
id - 908660 , researchers - 5
id - 908876 , researchers - 7
id - 910480 , researchers - 10
id - 901026 , researchers - 15The -k 1.29n means "sort on the key given by the 29th character of the 1st field (onwards, to the end of the line), numerically".
The -k 7,7n used in the text above just happens to be equivalent to -k 7.1,7.1n.
|
I'm trying to sort a file based on a particular position but that does not work, here is the data and output.
~/scratch$ cat id_researchers_2018_sample
id - 884209 , researchers - 1
id - 896781 , researchers - 4
id - 901026 , researchers - 15
id - 904091 , researchers - 1
id - 905525 , researchers - 1
id - 908660 , researchers - 5
id - 908876 , researchers - 7
id - 910480 , researchers - 10
id - 916197 , researchers - 1
~/scratch$ sort -k 28,5 id_researchers_2018_sample
id - 884209 , researchers - 1
id - 896781 , researchers - 4
id - 901026 , researchers - 15
id - 904091 , researchers - 1
id - 905525 , researchers - 1
id - 908660 , researchers - 5
id - 908876 , researchers - 7
id - 910480 , researchers - 10
id - 916197 , researchers - 1
I'm wanting to sort this by the numbers in the last column, like this:
id - 884209 , researchers - 1
id - 904091 , researchers - 1
id - 905525 , researchers - 1
id - 916197 , researchers - 1
id - 896781 , researchers - 4
id - 908660 , researchers - 5
id - 908876 , researchers - 7
id - 910480 , researchers - 10
id - 901026 , researchers - 15 | Sort numerical column |
This is related to the generalised strnum handling in version 4.2 of GAWK.
Input values which look like numbers are treated as strnum values, represented internally as having both string and number types. “0123” qualifies as looking like a number, so it is handled as a strnum. strtonum is designed to handle both string and number inputs; it looks for numbers first, and when it encounters an input number, returns the number without transformation:
NODE *
do_strtonum(int nargs)
{
NODE *tmp;
AWKNUM d; tmp = fixtype(POP_SCALAR());
if ((tmp->flags & NUMBER) != 0)
d = (AWKNUM) tmp->numbr;
else if (get_numbase(tmp->stptr, tmp->stlen, use_lc_numeric) != 10)
d = nondec2awknum(tmp->stptr, tmp->stlen, NULL);
else
d = (AWKNUM) force_number(tmp)->numbr; DEREF(tmp);
return make_number((AWKNUM) d);
}
Thus “0123” becomes the number 123, and strtonum returns that directly.
“0x123” doesn’t look like a number (by the rules defined in the link given above), so it is handled as a string and processed as you’d expect by strtonum.
A number is defined as follows in AWK:
The input string is decomposed into two parts: an initial, possibly empty, sequence of white-space characters (as specified by isspace()) and a subject sequence interpreted as a floating-point constant.
The expected form of the subject sequence is an optional '+' or '-' sign, then a non-empty sequence of digits optionally containing a <period>, then an optional exponent part. An exponent part consists of 'e' or 'E', followed by an optional sign, followed by one or more decimal digits.
The sequence starting with the first digit or the <period> (whichever occurs first) is interpreted as a floating constant of the C language, and if neither an exponent part nor a <period> appears, a <period> is assumed to follow the last digit in the string. If the subject sequence begins with a <hyphen-minus>, the value resulting from the conversion is negated. |
According to $ man gawk, the strtonum() function can convert a string into a number:strtonum(str) Examine str, and return its numeric value. If
str begins with a leading 0, treat it as an
octal number. If str begins with a leading 0x
or 0X, treat it as a hexadecimal number. Oth‐
erwise, assume it is a decimal number.
And if the string begins with a leading 0, the number is treated as octal, while if it begins with 0x it's treated as hexadecimal.
I've run these commands to check my understanding of the function:
$ awk 'END { print strtonum("0123") }' <<<''
83$ awk 'END { print strtonum("0x123") }' <<<''
291The string "0123" is correctly treated as containing an octal number and converted into the decimal number 83.
Similarly, the string "0x123" is correctly treated as containing an hexadecimal number and converted into the decimal number 291.
Now, here's what happens if I run the same commands, but moving the numerical strings from the program text to the input data:
$ awk 'END { print strtonum($1) }' <<<'0123'
123
$ awk 'END { print strtonum($1) }' <<<'0x123'
291
I understand the second result, which is identical to the previous commands, but I don't understand the first one. Why does gawk now treat 0123 as a decimal number, even though it begins with a leading 0, which characterizes octal numbers?
I suspect it has something to do with the strnum attribute, because for some reason 1, gawk gives this attribute to 0123 but not to 0x123:
$ awk 'END { print typeof($1) }' <<<'0123'
strnum
$ awk 'END { print typeof($1) }' <<<'0x123'
string
1 It may be due to a variation between awk implementations:
To clarify, only strings that are coming from a few sources (here quoting the
POSIX spec): [...] are to be considered a numeric string if their value happens
to be numerical (allowing leading and trailing blanks, with variations between
implementations in support for hex, octal, inf, nan...).
I'm using gawk version 4.2.62, and the output of $ awk -V is:
GNU Awk 4.2.62, API: 2.0 (GNU MPFR 3.1.4, GNU MP 6.1.0) | Why does gawk treat `0123` as a decimal number when coming from the input data? |
You can do this using printf and bash:
printf '%08x\n' $(< test.txt)
Or using printf and bc...just...because?
printf '%08s\n' $(bc <<<"obase=16; $(< test.txt)")In order to print the output to a text file just use the shell redirect > like:
printf '%08x\n' $(< test.txt) > output.txt
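For the extra operation in EDIT #1 (adding hex 80000000 to each value before formatting), a hedged sketch using bash arithmetic, which accepts the 0x prefix directly (assumes 64-bit shell arithmetic):
while read -r n; do printf '%08x\n' "$(( n + 0x80000000 ))"; done < test.txt > output.txt
|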
I have a need to convert a list of decimal values in a text file into hex format, so for example test.txt might contain:
131072
196608
262144
327680
393216
...the output should be list of hex values (hex 8 digit, with leading zeroes):
00020000
00030000
00040000
...the output is printed into the text file. How to make this with python or linux shell script?
EDIT #1
I missed one extra operation: I need to add 80000000 hex to each of the created hex values. (arithmetic addition, to apply to already created list of hex values).
| Convert a list of decimal values in a text file into hex format |
grep works well for this:
$ echo "2.5 test. test -50.8" | grep -Eo '[+-]?[0-9]+([.][0-9]+)?'
2.5
-50.8
How it works
-E
Use extended regex.
-o
Return only the matches, not the context
[+-]?[0-9]+([.][0-9]+)?+
Match numbers which are identified as:
[+-]?
An optional leading sign
[0-9]+
One or more numbers
([.][0-9]+)?
An optional period followed by one or more numbers.
Getting the output on one line
$ echo "2.5 test. test -50.8" | grep -Eo '[+-]?[0-9]+([.][0-9]+)?' | tr '\n' ' '; echo ""
2.5 -50.8 |
I am trying to extract numbers out of some text. Currently I am using the following:
echo "2.5 test. test -50.8" | tr '\n' ' ' | sed -e 's/[^0-9.]/ /g' -e 's/^ *//g' -e 's/ *$//g' | tr -s ' 'This would give me 2.5, "." and 50.8. How should I modify the first sed so it would detect float numbers, both positive and negative?
| Extracting positive/negative floating-point numbers from a string |
This quote is from memory and so probably not quite right but it conveys the essence of the problem: "Operating on floating point numbers is like moving piles of sand: every time you do it, you lose a little sand and you get a bit of dirt" (from Kernighan and Plauger's "Elements of programming style" IIRC). Every programming language has that problem.
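The concrete cause here is that 0.1 has no exact binary (IEEE 754 double) representation, so the multiplication is performed on a value very slightly larger than 0.1 and the error shows up in the last printed digit. A quick hedged way to see the stored value from a shell (typical glibc output shown):
$ printf '%.20f\n' 0.1
0.10000000000000000555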
| I found out the floating point multiplication in mit-scheme is not accurate, for example,
1 ]=> (* 1991.0 0.1)
will produce
;Value: 199.10000000000002
Could you please help explain the appearance of the weird trailing number “2”?
| Inaccurate multiplication in mit-scheme [closed] |
Using awk:
awk '{ for (i=1;i<=NF;i++) { if ($i ~ /num2=/) {sub(/num2=/, "", $i); $i="num2="$i-5; print} } }' file
This will loop through each column of each line looking for the column that contains num2=. When it finds that column it will:
Remove num2= - sub(/num2=/, "", $i)
Redefine that column as num2={oldnum-5} - $i="num2="$i-5
Print the line - print |
File contents:
RANDOM TEXT num1=400 num2=15 RANDOM TEXT
RANDOM TEXT num1=300 num2=10 RANDOM TEXT
RANDOM TEXT num1=200 num2=5 RANDOM TEXT
I would like to subtract 5 from each num2 on every line, like so:
RANDOM TEXT num1=400 num2=10 RANDOM TEXT
RANDOM TEXT num1=300 num2=5 RANDOM TEXT
RANDOM TEXT num1=200 num2=0 RANDOM TEXT
Pure bash is preferred, but no biggie if another GNU tool does it better.
| How can I adjust nth number in a line? |
If you want to do this only using commonly-available tools (at least on Linux distributions), the most efficient way is probably to ask shuf:
shuf -i 1-139 -n 1519 -r
This produces 1519 numbers randomly chosen between 1 and 139.
To ensure that each place gets one person, shuffle 139 numbers first without repeating:
shuf -i 1-139
shuf -i 1-139 -n 1380 -r
To reduce the “first 139” effect (the first 139 people would all end up in different places), shuffle all this again:
(shuf -i 1-139; shuf -i 1-139 -n 1380 -r) | shuf |
This question is about generating random numbers between a range, which is fine, but it doesn't fit my case.
I'll explain in SQL terms because it seems to me that's easier to understand, though the question is about bash. My idea is to build a SQL script from the results of the bash code.
I have two MySQL tables, one of people and one of places. Each record has an unique integer id which goes from 1 to 139 (places) and 1 to 1519 (people). They are linked to each other by a foreign key, meaning: a place can have many people, but a person can have only one place.
# 1-139 # 1-1519
place1 → person1
→ person2
→ person3
... and so on
The data I have right now has all the people linked to one place, and the rest of the places without any.
The places are 139 and the people are 1519, so I have one place with 1519 people.
My goal is to distribute the people randomly to the places, and that each place has at least one person.
My code so far is this:
$ c=1519
$ while [[ $c -ne 0 ]]; do
x=$((shuf -i 1-139 -n 1))
[[ $x -gt 139 ]] && continue
echo $x
(( c-- ))
done
This code generates 1519 random numbers between 1-139, so now I can have each person linked to a random place.
My questions are:
Is there a more efficient way to accomplish this?
How can I control that each place has at least one person?
I prefer to do this in bash, but I'm open to other solutions not involving it.
| Random number output between two ranges linked together |
... | awk '{ sum+=$1} END { print sum/NR}'
By default, (GNU) awk prints numbers with up to 6 significant digits (plus the exponent part). This comes from the default value of the OFMT variable. It doesn't say that in the docs, but this only applies to non-integer valued numbers.
You could change OFMT to affect all print statements, or rather, just use printf here, so it also works if the average happens to be an integer. Something like %.3f would print the numbers with three digits after the decimal point.
...| awk '{ sum+=$1} END { printf "%.3f\n", sum/NR }'See the docs for the meaning of the f and g, and the precision modifier (.prec in the second link):https://www.gnu.org/software/gawk/manual/html_node/Control-Letters.html
https://www.gnu.org/software/gawk/manual/html_node/Format-Modifiers.html
awk 'NR == 1 { max=$1; min=$1; sum=0 } ...'
This doesn't initialize NR. Instead, it checks if NR is equal to one, i.e. we're on the first line. (== is comparison, = is assignment.) If so, it initializes max, min and sum. Without that, max and min would start as zeroes. You could never have a negative maximum value, or a positive minimum value.
|
I want to get the exact number when I try to find the average of a column of values.
For example, this is the column of input values:
1426044
1425486
1439480
1423677
1383676
1360088
1390745
1435123
1422970
1394461
1325896
1251248
1206005
1217057
1168298
1153022
1199310
1250162
1247917
1206836
When I use the following command:
... | awk '{ sum+=$1} END { print sum/NR}'
I get the following output: 1.31638e+06. However, I want the exact number, which is 1316375.05 or even better, in this format 1,316,375.05
How can I do this with command line tools only?
EDIT 1
I found the following one-liner awk command which will get me the max, min and mean:
awk 'NR == 1 { max=$1; min=$1; sum=0 } { if ($1>max) max=$1; if ($1<min) min=$1; sum+=$1;} END {printf "Min: %d\tMax: %d\tAverage: %.2f\n", min, max, sum/NR}'
Why is it that NR must be initialized as 1? When I delete NR == 1, I get the wrong result.
EDIT 2
I found the following awk script from Is there a way to get the min, max, median, and average of a list of numbers in a single command?. It will get the sum, count, mean, median, max, and min values of a single column of numeric data, all in one go. It reads from stdin, and prints tab-separated columns of the output on a single line. I tweaked it a bit. I noticed that it does not need NR == 1 unlike the awk command above (in my first edit). Can someone please explain why? I think it has to do with the fact that the numeric data has been sorted and placed into an array.
#!/bin/sh
sort -n | awk ' $1 ~ /^(\-)?[0-9]*(\.[0-9]*)?$/ {
a[c++] = $1;
sum += $1;
}
END {
ave = sum / c;
if( (c % 2) == 1 ) {
median = a[ int(c/2) ];
} else {
median = ( a[c/2] + a[c/2-1] ) / 2;
} {printf "Sum: %d\tCount: %d\tAverage: %.2f\tMedian: %d\tMin: %d\tMax: %d\n", sum, c, ave, median, a[0], a[c-1]}
}
' | Number formatting and rounding issue with awk |
A better command to use for arbitrarily large numbers is bc. Here's a function to perform the conversion
hextodec() {
local hex="${1#0x}"
printf "ibase=16; %s\n" "${hex^^}" | bc
}
hextodec 0x8110D248
2165363272
I'm using a couple of strange-looking features here that manipulate the value of the variables as I use them:
"${1#0x}" - This references "$1", the first parameter to the function, as you would expect. The # is a modifier (see man bash, for example, or read POSIX) that removes the following expression from the front of the value. For example, 0xab12 would be returned as ab12
"${hex^^}" - This references "$hex" but returns its value with alphabetic characters mapped to uppercase. (This is a bash extension, so read man bash but not POSIX.) For example, 12ab34 would be returned as 12AB34In both cases the { … } curly brackets bind the modifiers to the variable; "$hex^^" would have simply returned the value of the $hex variable followed by two up-arrow/caret characters
|
When the hex number is relative small, I can use
echo 0xFF| mawk '{ printf "%d\n", $1}'to convert hex to dec.
When the hex number is huge, mawk does not work any more, e.g.
echo 0x8110D248 | mawk '{ printf "%d\n", $1 }'outputs 2147483647 (which is wrong, 2147483647 is equivalent to 0x7FFFFFFF).
How can I convert larger numbers?
I have a lot of numbers (one number per line, more than 10M) to be processed, e.g: each 0xFF\n 0x1A\n 0x25\n. How to make it work for such occasion? By xargs? Is there any better method? xargs is really slow.
| Converting HEX to DEC is out of range by using `mawk` |
Easy enough with grep:
$ cat ip.txt
123.434
1456.8123
2536.577
345.95553
23643.1454
$ grep -o '^[0-9]*\.[0-9]' ip.txt
123.4
1456.8
2536.5
345.9
23643.1
^ start of line
[0-9]* zero or more digits
\. match literal dot character
[0-9] match a digit
since the -o option of grep is used, only the matched portion is printed, effectively removing the remaining characters
If there are other columns, use sed
$ cat ip.txt
123.434 a
1456.8123 b
2536.577 c
345.95553 d
23643.1454 e$ sed -E 's/^([0-9]*\.[0-9])[0-9]*/\1/' ip.txt
123.4 a
1456.8 b
2536.5 c
345.9 d
23643.1 e
-E use extended regex
required pattern is captured in () and \1 used in replacement section
[0-9]* after the capture group gets deleted
Further reading:
What does this regex mean?
GNU sed manual |
There seems to be a number of neat and simple methods for rounding all numbers in a column to 1 decimal place, using awk's printf or even bash's printf. However I can't find an equally simple method for just reducing all numbers in a column to 1 decimal place (but not round up). The simplest method for sorting this at the moment would be to round to 2 decimal places and then remove the last character from every line in column 1. Anyone got a better method for this? An example input and output would be as follows:
Input
123.434
1456.8123
2536.577
345.95553
23643.1454
Output
123.4
1456.8
2536.5
345.9
23643.1 | Round down/truncate decimal places within column |
FWIW,
prob=$(echo "0.0139" | bc)is unnecessary - you can just do
prob=0.0139Eg,
$ prob=0.0139; echo "scale=5;1/$prob" | bc
71.94244
There's another problem with your code, apart from the underflow issue. Bash arithmetic may not be adequate to handle the large numbers in your nCk2 function. Eg, on a 32 bit system passing 10 to that function returns a negative number, -133461297271.
To handle the underflow issue you need to calculate at a larger scale, as mentioned in the other answers. For the parameters given in the OP a scale of 25 to 30 is adequate.
I've re-written your code to do all the arithmetic in bc. Rather than just piping commands into bc via echo, I've written a full bc script as a here document inside a Bash script, since that makes it easy to pass parameters from Bash to bc.
#!/usr/bin/env bash
# Binomial probability calculations using bc
# Written by PM 2Ring 2015.07.30
n=144
p='1/72'
m=16
scale=30
bc << EOF
define ncr(n, r)
{
auto v, i
v = 1
for(i=1; i<=r; i++)
{
v *= n--
v /= i
}
return v
}define binprob(p, n, r)
{
auto v
v = ncr(n, r)
v *= (1 - p) ^ (n - r)
v *= p ^ r
return v
}sc = $scale
scale = sc
outscale = 8
n = $n
p = $p
m = $m
for(i=0; i<=m; i++)
{
v = binprob(p, n, i)
scale = outscale
print i,": ", v/1, "\n"
scale = sc
}
EOF
output
0: .13345127
1: .27066174
2: .27256781
3: .18171187
4: .09021610
5: .03557818
6: .01160884
7: .00322338
8: .00077747
9: .00016547
10: .00003146
11: .00000539
12: .00000084
13: .00000012
14: .00000001
15: 0
16: 0 |
Following code calculates the Binomial Probability of a success event k out of n trials:
n=144
prob=$(echo "0.0139" | bc)echo -e "Enter no.:"
read passedno
k=$passedno
nCk2() {
num=1
den=1
for((i = 1; i <= $2; ++i)); do
((num *= $1 + 1 - i)) && ((den *= i))
done
echo $((num / den))
}
binomcoef=$(nCk2 $n $k)
binprobab=$(echo "scale=8; $binomcoef*($prob^$k)*((1-$prob)^($n-$k))" | bc)
echo $binprobab
When "5" is entered for $passedno (=k), the result is shown as 0 (instead of "0.03566482"), whereas with "4" passed I get ".07261898".
How can I print the output with given precision of 8 decimal digits without getting the rounded value of the output?
| bc scale: How to avoid rounding? (Calculate small binomial probability) |
You can do it in a single GNU awk:
gawk -F ',' '
{
for(i=1;i<=NF;i++){matrix[i][NR]=$i}
}
END{
for(i=1;i<=NF;i++){asort(matrix[i])}
for(j=1;j<=NR;j++){
for(i=1;i<NF;i++){
printf "%s,",matrix[i][j]
}
print matrix[i][j]
}
}
' file
for(i=1;i<=NF;i++){matrix[i][NR]=$i}
Multidimensional array (GNU extension) matrix gets populated, so that matrix[i][j] contains the number of column i, row j.
for(i=1;i<=NF;i++){asort(matrix[i])}
Sorts each column (GNU extension).
Finally
for(j=1;j<=NR;j++){
for(i=1;i<NF;i++){
printf "%s,",matrix[i][j]
}
print matrix[i][j]
}Prints a sequence of a[1],, a[2],, ..., a[NF-1],, a[NF]\n for each line.
|
I'm trying to numerically sort every column individually in a very large file. I need the command to be fast, so I'm trying to do it in an awk command.
Example Input:
1,4,2,7,4
9,2,1,1,1
3,9,9,2,2
5,7,7,8,8Example Output:
1,2,1,1,1
3,4,2,2,2
5,7,7,7,4
9,9,9,8,8I made something that will do the job (but its not the powerful awk command I need):
for i in $(seq $NumberOfColumns); do
SortedMatrix=$(paste <(echo "$SortedMatrix") <(awk -F ',' -v x=$i '{print $x}' File | sort -nr) -d ,)
done
but it is very slow!
I've tried to do it in awk and I think I'm close:
SortedMatrix=$(awk -F ',' 'NR==FNR {for (i=1;i<=NF;i++) print|"sort -nr"}' File)
But it doesn't output columns (just one very long column). I understand why it's doing this, but I don't know how to resolve it; I was thinking of using paste inside awk but I have no idea how to implement it.
Does anyone know how to do this in awk? Any help or guidance will be much appreciated
| Numerical sorting of every column in a file individually using awk |
The output of awk contains only one column, so no 16th column.
So sort sees all identical empty sort keys and what you observe is the result of the last resort sort (lexical sort on the whole line) which you can disable with the -s option in some implementations.
Here you want:
awk -F'[:,]' '{print $16}' test.json | sort -nNow, if you want to sort the file on the 16th column, beware sort supports only one character column delimiter, so you'd need to preprocess the input:
sed 's/[:,]/&+/g' test.json | sort -t+ -k16,16n | sed 's/\([:,]\)+/\1/g'
Here appending a + to every : or ,, sorting with + as the column delimiter and removing the + afterwards.
|
For example I have some test file and want to sort it by the column values. After
awk -F'[:,]' '{print $16}' test.jsonI get next output for the column 16:
123
457
68
11
939
11
345
9
199
13745Now, when I want to sort it numeric, I use
awk -F'[:,]' '{print $16}' test.json | sort -nk16
but I just get back a non-numeric sort...
11
11
123
13745
199
345
457
68
9
What is the reason? I thought the -n parameter was enough for a numeric sort....
| Numeric sort by the column values |
You will need a loop that goes over all columns:
{ for(i=1;i<=NF;i++) ...and arrays
... total[i]+=$i ; sq[i]+=$i*$i ; }
This results in a command line like (for the average):
awk '{ for(i=1;i<=NF;i++) total[i]+=$i ; }
END { for(i=1;i<=NF;i++) printf "%f ",total[i]/NR ;}' full program
I use this awk to compute the mean and variance; however, I don't get your result.
{ for(i=1;i<=NF;i++) {total[i]+=$i ; sq[i]+=$i*$i ; } }
END { for(i=1;i<=NF;i++) printf "%f ",total[i]/NR ;
printf "\n" ;
for(i=1;i<=NF;i++) printf "%f ",sq[i]/NR-(total[i]/NR)**2 ;
printf "\n" ;
} |
I have a large data file dataset.csv with 7 numeric columns. I have read that AWK would be the fastest/efficient way to calculate the mean and variance for each column. I need an AWK command that goes through the CSV file and outputs the results into a summary CSV. A sample dataset:
1 1 12 1 0 0 426530
1 1 12 2 0 0 685455
3 4 12 3 1 0 1182080
1 1 12 4 0 1 3090
2 1 13 5 0 0 386387
1 3 12 6 0 2 233430
3 1 11 7 1 0 896919
1 1 12 8 0 0 16441The resulting summary csv is seen below. The first row corresponds to the mean of each column and the second row is the variance(based on sample).
1.625 1.625 12 4.5 0.25 0.375 478791.5
0.839285714 1.410714286 0.285714286 6 0.214285714 0.553571429 1.74812E+11
I have been able to calculate single-column values; however, I need it to run through all of the columns.
awk -F' ' '{ total += $1 } END {print total/NR}' dataset.csv > output.csv | Using AWK to calculate mean and variance of columns |
With GNU sort and GNU split, you can do
split -l 20 file.txt --filter "sort -nk 4|tail -n 1"The file gets splitted in packets of 20 lines, then the filter option filters each packet by the given commands, so they get sorted numerically by the 4th key and only the last line (highest value) extracted by tail.
|
I have a file that has 1000 text lines. I want to sort the 4th column at each 20 lines interval and print the output to another file. Can anybody help me with sorting them with awk or sed?
Here is an example of the data structure input
1 1.1350 1092.42 0.0000
2 1.4645 846.58 0.0008
3 1.4760 840.01 0.0000
4 1.6586 747.52 0.0006
5 1.6651 744.60 0.0000
6 1.7750 698.51 0.0043
7 1.9216 645.20 0.0062
8 2.1708 571.14 0.0000
9 2.1839 567.71 0.0023
10 2.2582 549.04 0.0000
11 2.2878 541.93 1.1090
12 2.3653 524.17 0.0000
13 2.3712 522.88 0.0852
14 2.3928 518.15 0.0442
15 2.5468 486.82 0.0000
16 2.6504 467.79 0.0000
17 2.6909 460.75 0.0001
18 2.7270 454.65 0.0000
19 2.7367 453.04 0.0004
20 2.7996 442.87 0.0000
1 1.4962 828.64 0.0034
2 1.6848 735.91 0.0001
3 1.6974 730.45 0.0005
4 1.7378 713.47 0.0002
5 1.7385 713.18 0.0007
6 1.8086 685.51 0.0060
7 2.0433 606.78 0.0102
8 2.0607 601.65 0.0032
9 2.0970 591.24 0.0045
10 2.1033 589.48 0.0184
11 2.2396 553.61 0.0203
12 2.2850 542.61 1.1579
13 2.3262 532.99 0.0022
14 2.6288 471.64 0.0039
15 2.6464 468.51 0.0051
16 2.7435 451.92 0.0001
17 2.7492 450.98 0.0002
18 2.8945 428.34 0.0010
19 2.9344 422.52 0.0001
20 2.9447 421.04 0.0007
expected output:
11 2.2878 541.93 1.1090
12 2.2850 542.61 1.1579
Each n interval has only one highest (unique) value.
| How to sort each 20 lines in a 1000 line file and save only the sorted line with highest value in each interval to another file? |
Use variables to store data that you need to remember from one line to the next.
Line N+1 in the output is calculated from lines N and N+1 in the input, so you need variables to store the content of the previous line. There are two fields per line, so use one variable for each.
Lines 1 and 2 get special treatment (title line, and not enough data). You can match specific line numbers by testing the special variable NR. The instruction next causes the rest of the processing to be skipped for the current line.
Since this processing is fairly simple, it's enough to use variables for the content of the previous line. Once you've processed the current line, using the variables that were set when processing the previous line, store the contents of the current line into the variables.
NR == 1 { print "Slope"; next; }
NR == 2 { print "-"; }
NR >= 3 { print ($2 - y) / ($1 - x) }
NR >= 2 { x = $1; y = $2; }
Recall that awk runs the code for each input line in turn, and the expression before each braced group is a condition for running this group, so this is equivalent to the following pseudocode:
for each line {
NR = current line number;
$1 = first field; $2 = second field;
if (NR == 1) { print "Slope"; next; }
…
}
Alternatively, you might find the code more readable if you give names both to the previous line's data and to the current line's data. At the end of the current line processing, transfer the data from the “current” variables to the “previous” variables.
NR == 1 { print "Slope"; next; }
NR == 2 { print "-"; }
NR >= 2 { current_x = $1; current_y = $2; }
NR >= 3 { print (current_y - previous_y) / (current_x - previous_x) }
NR >= 2 { previous_x = current_x; previous_y = current_y; } |
awk newbie here.
Suppose I have two columns of data, and I want to calculate the rate of increase, given by delta(y)/delta(x). How would I do this in an awk script? What I've learnt so far only deals with line by line manipulation, and I'm not sure how I'd work with multiple lines.
Note: Suppose I have N data points, I would get N-1 values of the slope/rate.
Example:Input
x y
2 4
3 5
4 7
Output
Slope
-
1
2
Is awk the best option here, or is some other tool better?
| Calculating rates / 'derivative' with awk |
The the POSIX character class you are trying to use must be placed inside a regular bracket expression, so [[:digit:]] not [:digit:]. You're also not limited to using just the one character class in the bracket expression, so e.g. [[:digit:][:punct:]] or [^[:digit:]] can be used.
Your command actually means "print all lines that do not match any of the characters :, d, i, g or t:
$ printf 'a\nd\ni\n:\n'
a
d
i
:
$ printf 'a\nd\ni\n:\n' | sed -n '/[:digit:]/!p'
a
What you wanted was:
$ iostat | sed -n '/[[:digit:]]/!p'
avg-cpu: %user %nice %system %iowait %steal %idle
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
$ iostat | sed -n '/[:digit:]/!p'
sed: character class syntax is [[:space:]], not [:space:] |
Why does the following command print numerical values?
$ iostat | sed -n '/[:digit:]/!p'
1.56 1.38 0.31 0.34 0.03 96.38
| sed: !p command strange behavior |
As you note, there are many possibilities. The following versions with awk are roughly equivalent to the perl you included with your question:(with GNU awk):
awk -F : '{acch+=$1;accm+=$2;} ENDFILE { \
print acch+int(accm/60) ":" accm%60; }' [inputfile](with "POSIX" awk):
awk -F : '{acch+=$1;accm+=$2;print acch+int(accm/60) \
":" accm%60; }' [inputfile] | tail -1 |
I have a text file containing flight times, hours and minutes separated by colon sign, one per line:
00:50
00:41
00:43
00:50
01:24
I am currently using Apple's Numbers application with a simple formula to calculate the total time (the result being 4:28 for the example data above).
However, I was wondering if there is an easier way to achieve this. A perl script would work all right, but how about using Unix shell commands and/or scripting? I am basically looking for anything short and simple.
Yes, I can manage with Numbers, but this would be nice to know and learn :).
p.s. Posting perl script to achieve this in case someone else needs it:
#! /usr/bin/perl
while (<>) {
chomp;
($hours, $minutes) = split (/:/, $_);
$totalhours += $hours;
$totalminutes += $minutes;
}
while ($totalminutes > 59) {
$totalhours++;
$totalminutes -= 60;
}
printf ("%d:%02d\n", $totalhours, $totalminutes); | How to sum times in a text file using command-line? |
The leading zeros on the input value are causing the shell to interpret it as an octal number.
You can force decimal conversion using 10# e.g.
$ printf "Please enter the ticket number:\t"; read vTICKET; vTICKET=$(printf %04d "$((10#$vTICKET))" ); printf "$vTICKET\n";
Please enter the ticket number: 072
0072
Note that in bash, you can assign the result of a printf to a variable directly using -v, e.g. printf -v vTICKET %04d "$((10#$vTICKET))"
See also How do I stop Bash from interpreting octal code instead of integer?
|
I am currently looking for an alternative to the following code that works a little less 'wonky'.
printf "Please enter the ticket number:\t"; read vTICKET; vTICKET=$(printf %04d "$vTICKET"); printf "$vTICKET\n";If I input 072 as the input, this is what I see
Please enter the ticket number: 072
0058
I am wondering if there is another way I can be a little more forgiving on the input or with the read command? printf seemed like the cool way to add leading zeroes without actually testing string length.
| Add leading zeroes to a user's input but is being transformed with printf |
You can use awk like this :
grep "pattern" file.txt | awk '{printf "%s ", $3}'Depending of what you do with grep, but you should consider using awk for greping itself :
awk '/pattern/{printf "%s ", $3}' file.txtAnother way by taking advantage of bash word-spliting :
echo $(awk '/pattern/{print $3}' file.txt)Edit : I have a more funny way to join values :
awk '/pattern/{print $3}' file.txt | paste -sd " " - |
I'm currently writing a shell script that separate values from their identifiers (retrieved from grep).
For example, if I grep a certain file I will retrieve the following information:
value1 = 1
value2 = 74
value3 = 27
I'm wondering what UNIX command I can use to take in the information and convert it to this format:
1 74 27 | How to separate numerical values from identifiers |
TL;DR
While -n will sort simple floats such as 1.234, the -g option handles a much wider range of numerical formats but is slower.
Also -g is a GNU extension to the POSIX specification.From man sort, the relevant parts are:
-g, --general-numeric-sort, --sort=general-numeric
Sort by general numerical value. As opposed to -n, this option
handles general floating points. It has a more permissive format
than that allowed by -n but it has a significant performance
drawback.... -n, --numeric-sort, --sort=numeric
Sort fields numerically by arithmetic value. Fields are supposed
to have optional blanks in the beginning, an optional minus sign,
zero or more digits (including decimal point and possible thou-
sand separators)....STANDARDS
The sort utility is compliant with the IEEE Std 1003.1-2008 (``POSIX.1'')
specification. The flags [-ghRMSsTVz] are extensions to the POSIX specification....NOTES... When sorting by arithmetic value, using -n results in much better perfor-
mance than -g so its use is encouraged whenever possible.However, the full documentation is provided by info and not man.
From 7.1 sort: Sort text files, the description/distinction is clearer:
‘-g’
‘--general-numeric-sort’
‘--sort=general-numeric’
Sort numerically, converting a prefix of each line to a long
double-precision floating point number. See Floating point. Do
not report overflow, underflow, or conversion errors. Use the
following collating sequence:
Lines that do not start with numbers (all considered to be equal).
NaNs (“Not a Number” values, in IEEE floating point arithmetic) in a consistent but machine-dependent order.
Minus infinity.
Finite numbers in ascending numeric order (with -0 and +0 equal).
Plus infinity.
Use this option only if there is no alternative; it is much slower
than --numeric-sort (-n) and it can lose information when
converting to floating point.
You can use this option to sort hexadecimal numbers prefixed with
‘0x’ or ‘0X’, where those numbers are not fixed width, or of
varying case. However for hex numbers of consistent case, and left
padded with ‘0’ to a consistent width, a standard lexicographic sort
will be faster.
...
‘-n’
‘--numeric-sort’
‘--sort=numeric’
Sort numerically. The number begins each line and consists of optional
blanks, an optional ‘-’ sign, and zero or more digits possibly
separated by thousands separators, optionally followed by a
decimal-point character and zero or more digits. An empty number is
treated as ‘0’. The LC_NUMERIC locale specifies the decimal-point
character and thousands separator. By default a blank is a space or a
tab, but the LC_CTYPE locale can change this.
Comparison is exact; there is no rounding error.
Neither a leading ‘+’ nor exponential notation is recognized. To
compare such strings numerically, use the --general-numeric-sort
(-g) option.A quick demonstration:
$ printf '%s\n' 0.1 10 1e-2 | sort -n
0.1
1e-2
10
$ printf '%s\n' 0.1 10 1e-2 | sort -g
1e-2
0.1
10 |
What is the difference between the two sort options -n and -g?
It's a bit confusing to have too much detail but not enough adequate documentation.
| What's the difference between "sort -n" and "sort -g"? |
The 0+ needs to be prefixed to each $1 to force a numeric conversion. max does not need 0+ -- it is already cast to numeric when it is stored.
Paul--) AWK='
> BEGIN { max = 0; }
> 0+$1 > max { max = 0 + $1; }
> END { print max; }
> '
Paul--) awk "${AWK}" <<[][]
> 2.0
> 2.0e-318
> [][]
2
Paul--) awk "${AWK}" <<[][]
> 2.0e-318
> 2.0
> [][]
2 |
I am trying to find the maximum value of a column of data using gawk:
gawk 'BEGIN{max=0} {if($1>0+max) max=$1} END {print max}' dataset.dat
where dataset.dat looks like this:
2.0
2.0e-318
The output of the command is
2.0e-318
which is clearly smaller than 2.
Where is my mistake?
Edit
Interestingly enough, if you swap the rows of the input file, the output becomes
2.0
Edit 2
My gawk version is GNU Awk 4.2.1, API: 2.0 (GNU MPFR 4.0.2, GNU MP 6.1.2).
| Why does gawk (sometimes?) think 2.0e-318 > 2.0? |
Command: awk '$2 !~ /^-/{print $0}' file
output
1577.57 47
1578.87 49
1580.15 51
| I use this command
awk 'NR%2{t=$1;next}{print $1-t,$2}'
to get the distance between two consecutive Y points in a file. But I would like to have all positive numbers. How can I get that? Something like a modulus.
1577 -46.1492
1577.57 47
1578 -47.6528
1578.87 49
1579 -49.2106
1580 -50.7742
1580.15 51 | How to obtain only positive value in second column [closed] |
Task: stdout the load avg from top in a decimal form (e.g. 0.23)
Solution:
top -b -n 1 | perl -lane 'print "$1.$2" if /load average: (\d+)[,.](\d+)/'Notes:
This retrieves the 1m load average. It looks like this is what you want, but you should state it clearly. If needed, the code can be easily modified to retrieve the load averages over 5 minutes or 15 minutes, or even all three.
As pointed out by @terdon, uptime might be a better starting point than top in this case.
After the first two lines, you obscurely describe what you want to do with the result. Subsequent steps you want to take should be the subject of new questions.
In Perl, numbers are auto-cast to strings and vice-versa. Any numerical operation can be performed on a string representing a number, e.g. print "$1.$2"+11.11
Question 2:
This part is about the second question, which is totally unrelated to the first one.
I urge the OP to post this question separately.
How can I convert the string value to decimal/float/integer?
Better written as: Performing numeric comparisons on strings with Chef's InSpec.
Solution:
Convert the string to a numeric format, with either to_i or to_f.
Example:
describe command("echo 1.00") do
its("stdout.to_f") { should be < 1.03 }
end
Explanation:
Very reasonably, stdout is treated as a string. Also very reasonably, numeric comparisons require the two numbers to be...numbers. Luckily, conversion can be done with the handy Ruby string methods: to_i, to_f, to_r and to_c.
|
Task:
stdout the load avg from top in a decimal form like (e.g 0.23)
Details:
I need to parse this script to Chef's inspec and check if its result is bigger/smaller than something, for example:
describe command("top -b -n 1 | awk '/load average/ { sub(/,/,\".\",$10); printf \"%f\n\",$10}'") do
its("stdout") { should eq 0.00 }
end
This example returns ""
But now when I think about it, I could compare with a file in /proc/loadavg
Progress
Used this resource: Grab the load average with top
With this command, I get a good representation of the output, but it's a string and I can't do any mathematical operations with it:
martin@martinv-pc:~$ top -b -n 1 | awk '/load average/ { printf "%s\n", $10}'
0,63,
But when I try to change the printf to decimal/float, I get an error:
martin@martinv-pc:~$ top -b -n 1 | awk '/load average/ { printf "%f\n", $10}'
0.000000
martin@martinv-pc:~$ top -b -n 1 | awk '/load average/ { printf "%d\n", $10}'
0Can't echo, tried with cut, bad idea -- not working:
martin@martinv-pc:~$ top -b -n 1 | awk '/load average/ { printf "%s\n", $10}'|cut -c1-4
0,15
martin@martinv-pc:~$ top -b -n 1 | awk '/load average/ { printf "%s\n", $10}'|$((cut -c1-4))
-4: command not foundAnother attempt:
martin@martinv-pc:~$ top -b -n 1 | awk '/load average/ BEGIN { printf "%.f\n", $10};'
awk: line 1: syntax error at or near BEGINQuestion:
How can I convert the string value to decimal/float/integer ?
ps -o user,rss output:
[vagrant@localhost ~]$ ps -o user,rss
USER RSS
vagrant 736
vagrant 1080 | Converting awk printf string to decimal |
The regex below will extract the number of bytes, only the number:
contentlength=$(
LC_ALL=C sed '
/^[Cc]ontent-[Ll]ength:[[:blank:]]*0*\([0-9]\{1,\}\).*$/!d
s//\1/; q' headers
)

After the above change, the contentlength variable will only be made of decimal digits (with leading 0s removed so the shell doesn't consider the number as octal), thus the 2 lines below will display the same result:
echo "$(($contentlength/9))"echo "$((contentlength/9))" |
I'm currently working on a project called 'dmCLI' which means 'download manager command line interface'. I'm using curl to multi-part download a file. And I'm having trouble with my code. I can't convert string to int.
Here's my full code. I also uploaded my code into Github Repo. Here it is:
dmCLI GitHub Repository
#!/bin/bash
RED='\033[0;31m'
NC='\033[0m'

help() {
echo "dmcli [ FILENAME:path ] [ URL:uri ]"
}echo -e "author: ${RED}@atahabaki${NC}"
echo -e " __ _______ ____"
echo -e " ___/ /_ _ / ___/ / / _/"
echo -e "/ _ / ' \/ /__/ /___/ / "
echo -e "\_,_/_/_/_/\___/____/___/ "
echo -e " "
echo -e "${RED}Downloading${NC} has never been ${RED}easier${NC}.\n"if [ $# == 2 ]
then
filename=$1;
url=$2
headers=`curl -I ${url} > headers`
contentlength=`cat headers | grep -E '[Cc]ontent-[Ll]ength:' | sed 's/[Cc]ontent-[Ll]ength:[ ]*//g'`
acceptranges=`cat headers | grep -E '[Aa]ccept-[Rr]anges:' | sed 's/[Aa]ccept-[Rr]anges:[ ]*//g'`
echo -e '\n'
if [ "$acceptranges" = "bytes" ]
then
echo File does not allow multi-part download.
else
echo File allows multi-part download.
fi
echo "$(($contentlength + 9))"
# if [acceptranges == 'bytes']
# then
# divisionresult = $((contentlength/9))
# use for to create ranges...
else
help
fi

# First get Content-Length via regex or request,
# Then Create Sequences,
# After Start Downloading,
# Finally, re-assemble to 1 file.

I want to divide contentlength's value by 9. I tried this:
echo "$(($contentlength/9))"It's getting below error:
/9")syntax error: invalid arithmetic operator (error token is "
I'm using localhost written in node.js. I added head responses. It returns Content-Length of the requested file, and the above dmCLI.sh gets correctly the Content-Length value.
./dmcli.sh home.html http://127.0.0.1:404/

A HEAD request to http://127.0.0.1:404/ returns: Content-Length: 283, Accept-Range: bytes
The dmCLI works for getting the value, but when I want to access its value, it won't work.
Simple actions work like:
echo "$contentlength"But I can't access it by using this:
echo "$contentlength bytes"and here:
echo "$(($contentlength + 9))"returns 9 but I'm expecting 292. Where's the problem; why is it not working?
| String to integer in Shell |
I think this should do it:
grep -E '([5-9][0-9]{3}|[0-9]{5,})ms' | grep -v 5000ms

How does it work?

It uses -E so the regex is of the "modern" (extended) format. It just makes the typing easier in our case as we can save some \ chars.
The (...|...)ms searches for two alternatives followed by the string ms. This is necessary as regex can not compare numbers so I can not say something like >= 5000.
The first alternative is [5-9][0-9]{3} which will match any string that starts with a number from 5 to 9 followed by 3 occurrences of numbers from 0 to 9. Those are all numbers >= 5000 and < 10000.
The second alternative will match a string of 5 or more digits, that is any number >= 10000.
At the end we pipe the result to grep -v 5000ms to filter out any occurrence of 5000ms, because you said greater than 5000. If you want greater or equal, just leave that out.

Where to learn more?
Read man 1 grep and man 7 regex.
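A quick sanity check with some made-up log fragments (only durations strictly greater than 5000ms survive the two filters):

$ printf '%s\n' '- GET - 204ms -' '- GET - 4999ms -' '- GET - 5000ms -' '- GET - 5001ms -' '- GET - 12342ms -' | grep -E '([5-9][0-9]{3}|[0-9]{5,})ms' | grep -v 5000ms
- GET - 5001ms -
- GET - 12342ms -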
|
I have a .log file where each entry is on the form
2018-09-28T10:53:48,006 [Jetty-6152 ] INFO [correlationId] my.package.service:570 - Inbound request: 1.2.3.4 - GET - 12342ms - 200 - /json/some/resource
2018-09-28T11:53:48,006 [Jetty-6152 ] INFO [correlationId] my.package.service:570 - Inbound request: 1.2.3.4 - GET - 204ms - 200 - /json/other/resource

How do I find all entries where the request took longer than 5 seconds? That is, entries containing the text "[numberGreaterThan5000]ms"?
| How do I grep for occurences of "[numberLargerThan5000]ms"? |
awk solution:
netstat -nau | awk -F'[[:space:]]+|:' 'NR>2 && $5>=32000 && $5<=64000'

The output in your case would be:
udp 0 0 10.0.0.20:55238 0.0.0.0:*
udp 0 0 10.0.0.20:55240 0.0.0.0:*
udp 0 0 10.0.0.20:55244 0.0.0.0:*
udp 0 0 10.0.0.20:32246 0.0.0.0:*
udp 0 0 10.0.0.20:55248 0.0.0.0:*

-F'[[:space:]]+|:' - field separator
NR>2 && $5>=32000 && $5<=64000 - checks if port number is in the needed rangeAlternative egrep solution:
netstat -nau | egrep ':(3[2-9]|[45][0-9])[0-9]{3}|6[0-3][0-9]{3}|64000'

(3[2-9]|[45][0-9])[0-9]{3} - will cover numbers from 32000 to 59999
6[0-3][0-9]{3}|64000 - will cover numbers from 60000 to 64000 |
In my netstat output, I want to extract port range between 32000-64000. I have tried egrep "^[3,4,5,6]" but I need to start from 32000. Should I use awk or some kind of script?
Linux# netstat -nau
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
udp 0 0 10.0.0.20:55238 0.0.0.0:*
udp 0 0 10.0.0.20:55240 0.0.0.0:*
udp 0 0 10.0.0.20:31242 0.0.0.0:*
udp 0 0 10.0.0.20:55244 0.0.0.0:*
udp 0 0 10.0.0.20:32246 0.0.0.0:*
udp 0 0 10.0.0.20:55248 0.0.0.0:*
udp 0 0 10.0.0.20:12250 0.0.0.0:*
udp 0 0 10.0.0.20:19252 0.0.0.0:* | grep regex port range from netstat |
You can use the combination find, grep and awk command to get the desired result. The below is a oneliner which will print the file which has the maximum temperature recorded.
find . -mindepth 3 -exec echo -n "{} " \; -exec grep "PROCESSOR_ZONE" {} \; |
awk '{
split($4,val,"/");
gsub("C","",val[1]);
if (max<val[1]) {file=$1; max=val[1]}
} END {print(file)}'

Output

./2012.04.16/00:10/temps.txt

Below is the script version of the oneliner.
#!/bin/bash

# The path where temperature directories and files are kept
path="/tmp/tmp.ADntEuTlUT/"# Temp file
tempfile=$(mktemp)

# Get the list of file names and their corresponding
# temperature data.
find "${path}" -mindepth 3 -exec echo -n "{} " \; -exec grep "PROCESSOR_ZONE" {} \; > "${tempfile}"# Parse though the temp file to find the maximum
# temperature based on Celsius
awk '{split($4,val,"/");gsub("C","",val[1]);if(max<val[1]){file=$1;max=val[1]}} END{print(file)}' "${tempfile}"# Removing the temp file
rm -f "${tempfile}" |
For example, there are temperature data in those folders at different times.
temps.txt contains the temperature number. So how can I use bash script to find out the maximum temperature? (the results only show the date,time and number of temperature,e.g. ./2011.10.20/00:00/temps.txt 27C).
$ ls
2011.10.20 2012.01.20 2012.04.16 2012.07.12 2012.10.07
2011.10.21 2012.01.21 2012.04.17 2012.07.13 2012.10.08
2011.10.22 2012.01.22 2012.04.18 2012.07.14 2012.10.09
$ cd 2011.10.20$ ls
00:00 02:25 04:50 07:15 09:40 12:05 14:30 16:55 19:20 21:45
00:05 02:30 04:55 07:20 09:45 12:10 14:35 17:00 19:25 21:50
00:10 02:35 05:00 07:25 09:50 12:15 14:40 17:05 19:30 21:55
$ cd 00:00
$ ls
temps.txt
$ cat temps.txt
Sensor Location Temp
------ -------- ----
#1 PROCESSOR_ZONE 27C/80F | How to find out the biggest number in many documents that contains different numbers |
This can be done with Python like:
Code:
#!/usr/bin/python
import re
import sys

SPACES = re.compile('\s+')

data_by_lat_long = {}

with open(sys.argv[1]) as f:
    # get and print header
    line = next(f)
    print(line.strip())

    for line in f:
        data = SPACES.split(line.strip())
        data_by_lat_long.setdefault((data[0], data[1]), []).append(data[2:])

for lat_long, data in data_by_lat_long.items():
    results = list(zip(*data))  # list() so the indexing below also works under Python 3
    if '-9.99' in results[0]:
        results[0] = ('-9.99', )
    avg = tuple(str(sum(float(x) for x in d) / len(d)) for d in results)
    print('\t'.join(lat_long + avg))

Results:
Lat Long air_temp sst wind_speed wave_height wave_period
65.3 7.3 3.15 6.4 6.35 3.0 9.0
61.1 1 -9.99 8.8 8.7 3.5 7.0
61.6 1.3 -9.99 8.7 8.75 3.5 7.0and,
Lat Long air_temp sst wind_speed wave_height wave_period
0 -0.7 24.0 24.8 5.7 1.0 3.0
0 0.1 22.9 22.5 7.7 1.0 5.0
0 0.3 27.0 27.0 4.6 2.0 12.0
0 0.2 24.0 26.0 4.6 2.0 6.0
0 0.8 27.0 28.0 7.2 1.5 9.0
0 -0.3 27.0 27.5 5.7 1.5 8.0
0 0.5 24.6 25.0 10.3 2.0 6.0
0 -0.2 -9.99 30.4 3.6 1.5 8.0
0 0.4 23.0 23.0 8.2 1.5 3.0
0 0.7 -9.99 27.5 6.6 1.25 9.0
0 0 25.3916666667 25.9416666667 3.51666666667 1.66666666667 9.08333333333
0 0.6 26.7 27.0 5.1 1.5 10.0 |
I have a file of about 13 million lines like:
Lat Long air_temp sst wind_speed wave_height wave_period
65.3 7.3 4.3 8.8 7.7 4 8
61.6 1.3 -9.99 8.8 9.8 4 7
61.2 1.1 -9.99 8.8 7.7 3 7
61.1 1 -9.99 8.8 8.7 3.5 7
61 1.7 -9.99 8.8 10.8 4 7
60.6 1.7 -9.99 8.8 8.2 4 10
60.6 3.7 -9.99 8.8 8.2 3.5 8
60.6 -4.9 4.7 8.8 10.3 3.5 7
60.4 1.2 5.1 7 15 2 4
59.6 2.2 2.3 7.7 4.6 3.5 9
59.5 1.6 -9.99 7.7 3.6 4 8I have had 72 files with all of these variables. I have merged them in just a single one and deleted the duplicates. What I have to do is when lat and long is the same for 2 lines, I have to calculate the average of the columns. For example:
Lat Long air_temp sst wind_speed wave_height wave_period
61.1 1 -9.99 8.8 8.7 3.5 7
61.6 1.3 -9.99 8.8 9.8 4 7
61.6 1.3 3 8.6 7.7 3 7
65.3 7.3 4.3 8.8 7.7 4 8
65.3 7.3 2 4 5 2 10outputfile will look something like this:
Lat Long air_temp sst wind_speed wave_height wave_period
61.1 1 -9.99 8.8 8.7 3.5 7
61.6 1.3 -9.99 8.7 8.75 3.5 7
65.3 7.3 3.15 6.4 6.35 3 9and therefore:if air_temp=-9.99 the 'average' calculated will be -9.99 as this shows missing data
the average is calculated over as many points there are with the same long and lat - in case there are 2 points with the coordinates 61.6 and 1.3, there will be just one line with the variables (air_temp, sst, wind_speed, wave_height and wave_period) calculated on an average.the original file:
Lat Long air_temp sst wind_speed wave_height wave_period
0 0.1 22.9 22.5 7.7 1 5
0 0.2 24 26 4.6 2 6
0 0 24.1 25.3 3 1.5 9
0 0 24.4 25.3 3 1.5 8
0 0 24.5 25.3 2 1.5 8
0 0 24.7 25.2 1 1.5 10
0 0 24.8 25.1 3 1.5 8
0 0 24.8 25.2 2 1.5 12
0 0 24.9 25.2 5 1.5 9
0 0 25.2 25.5 2 3.5 10
0 0 25 25.2 5 1.5 9
0 0 26.9 27.2 4 1.5 10
0 0 26.9 27.2 5 1.5 9
0 0 28.5 29.6 7.2 1.5 7
0 -0.2 -9.99 30.4 3.6 1.5 8
0 0.3 27 27 4.6 2 12
0 -0.3 27 27.5 5.7 1.5 8
0 0.4 23 23 8.2 1.5 3
0 0.5 24.6 25 10.3 2 6
0 0.6 26.7 27 5.1 1.5 10
0 -0.7 24 24.8 5.7 1 3
0 0.7 24 27 7.2 1.5 10
0 0.7 -9.99 28 6 1 8
0 0.8 27 28 7.2 1.5 9I don't know why it doesn't sort completely (why there is 0.1 and 0.2 in front), but nonetheless the desired output is:
Lat Long air_temp sst wind_speed wave_height wave_period
0 0.1 22.9 22.5 7.7 1 5
0 0.2 24 26 4.6 2 6
0 0 25.3916666667 25.9416666667 3.5166666667 1.6666666667 9.0833333333
0 -0.2 -9.99 30.4 3.6 1.5 8
0 0.3 27 27 4.6 2 12
0 -0.3 27 27.5 5.7 1.5 8
0 0.4 23 23 8.2 1.5 3
0 0.5 24.6 25 10.3 2 6
0 0.6 26.7 27 5.1 1.5 10
0 -0.7 24 24.8 5.7 1 3
0 0.7 -9.99 27.5 6.6 1.25 9
0 0.8 27 28 7.2 1.5 9 | How can I calculate the average of multiple columns with the same value for the first two columns? |
You can use paste -s to join lines:
shuf -i1-10 | paste -sd, -

This uses the -i option of shuf to specify a range of positive integers.
The output of seq can be piped to shuf:
seq 10 | shuf | paste -sd, -

Or -e to shuffle arguments:
shuf -e {1..10} | paste -sd, - |
I am trying to generate a comma separated unordered list of ints between 1 and 10, I have tried the following but it results in an ordered list:
seq -s "," 10 | shuf | How to generate a comma separated list of random ints |
Use perl:
perl -pe 's/(?<=\[)(\d+)(?=\])/$1+1/ge' prova.txt

Explanation:

-p means loop over every line and print the result after every line
-e defines the expression to execute on every line
s/from/to/ does a simple substitution
s/(\d+)/$1+1/ge matches one or more digits, captures them into $1, and then the e modifier on the end tells perl that the substitution string is an expression: $1+1 substitutes the value of $1 plus 1. The g modifier means do this substitution globally, i.e. more than once per line.
(?<=\[) is a positive zero-length lookbehind assertion. That means that what comes after it only matches if it's preceded by [ (which needs to be escaped with \ as [ is a special token in regular expressions). The zero-length thing means that it's not part of what will be replaced.
(?=\]) is a positive zero-length lookahead assertion. That means that what comes before it only matches if it's followed by ] (again escaped).

So this will take all numbers between [ and ] and increment that number; use $1+3 instead of $1+1 to shift each index by 3 as asked in the question.
|
Alright, it seems I cannot find a way to do what I need.
Let's say I have a text file A like this
(freqBiasL2[27])
(SatBiasL1[27])
(defSatBiasL2_L1[27])
(defSatBiasSlope[27])
(defSatBiasSigma[27]) (freqBiasL2[28])
(SatBiasL1[28])
(defSatBiasL2_L1[28])
(defSatBiasSlope[28])
(defSatBiasSigma[28]) (freqBiasL2[29])
(SatBiasL1[29])
(defSatBiasL2_L1[29])
(defSatBiasSlope[29])
(defSatBiasSigma[29])and so on.
I want to change the index between [] brackets such that each index i is = i+3.
I tried with a combination of a for loop and sed, but it is not working:
for i in {27..107..1}
do
i_new=$((i+3))
sed -e 's/\['$i'\]/\['$i_new'\]/' prova.txt
done

The problem is that it will change the first 27 to 30, but on the next iterations it will find 2 blocks with index 30 (the changed one and the original).
How can I do it without overwriting the already changed indexes?
Ok thanks, I edit my question for an improvement:
How can I do something similar is if I have 2 indexes like:
(freqBiasL2[32][100])
(SatBiasL1[32][101])
(defSatBiasL2_L1[32][102])
(defSatBiasSlope[32][103])
(defSatBiasSigma[32][104])

and I want to increment only the second index, ignoring the first one?
| Increment index in file |
Try this:
od -An -N8 -d /dev/random | sed -e 's| ||g' -e 's|\(.\{11\}\).*|\1|' |
I need a command that generates random numbers with 11 digits. How can this be done?
| Generates random numbers with 11 digits |
Pedantic or not, if our man is only looking at 3 decimals of precision....
Breaking out the good old awk hammer for the equally good old fashioned lowest denominator, rather than the high falutin' algorithm, just find the lowest error and denominator
echo "1.778" | awk 'BEGIN{en=1; de=101; er=1}{
for (d=100; d>=1; d--) {n=int(d*$1); e=n/d-$1; e=e*e;
if (e<=er && d<de){er=e; de=d; en=n}}
print en":"de, en/de}'So...
16:9 1.77778Something like this could equally be done in pure bash with the appropriate multiplier for the fraction.
If we are having a race
real 0m0.004s
user 0m0.001s
sys 0m0.003s |
I would like to convert a float to a ratio or fraction.
Do we have the option to convert a float 1.778 to ratio as 16:9 or 16/9 in bash, in a fashion similar to Python's fractions module (Fraction(1.778).limit_denominator(100)).
| How can I convert a float to ratio? |
Check this out:
awk -F, '{date1[$4]+=$1;++date2[$4]}END{for (key in date1) print "Average of",key,"is",date1[key]/date2[key]}' file
Average of 27:May:2017 is 2677.57
Average of 26:May:2017 is 1410.02
Average of 25:May:2017 is 2940.02

Explanation:
-F, : Defines the delimiter. Alternatively it could be awk 'BEGIN{FS=","}...
Then we create two arrays date1 and date2 in which we use the 4th field $4 as array index/key and the first field $1 as the value added to the existing value of the same array position.
So for the first row we would have
date1[27:May:2017]+=2415.02
++date2[27:May:2017] --> increases the value by 1 --> value 1 for first line
For the next same date (line 2) we would have
date1[27:May:2017]+=2415.02 + 3465.02
++date2[27:May:2017] --> increases the value by 1 --> value 2 (second line)
Same logic extends to all the lines having the same date and also to all different dates.
At the end, we use a for loop to iterate through the keys of array date1 (or date2 - the keys are the same in both arrays => $4) and for every key found we print the key (= the date $4) and we also print the date1[key] value = the sum of all $1 values for the same date $4, divided by the date2[key] value = the count of lines found having the same date = same $4.
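The for (key in date1) loop visits keys in no particular order; if a stable, date-ordered listing is wanted, one option (a sketch that relies on all rows being from the same month, as in the sample) is to sort the output on the date field:

awk -F, '{date1[$4]+=$1;++date2[$4]}END{for (key in date1) print "Average of",key,"is",date1[key]/date2[key]}' file | sort -k3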
|
I have the following csv format. There are vals from the whole month, but I've chunked it:
2415.02,2203.35,00:17,25:May:2017,
3465.02,2203.35,01:17,25:May:2017,
2465.02,2203.35,12:17,26:May:2017,
465.02,2203.35,13:17,26:May:2017,
245.02,2203.35,14:17,26:May:2017,
2465.02,2203.35,05:17,26:May:2017,
2865.02,2203.35,06:17,27:May:2017,
2490.12,2203.35,07:17,27:May:2017,I need to calculate average of the first column ($1) based on values for that day ($4). Note, that I can reformat date, if that is needed for easier calculation.
My miserable attempt was this:
$ awk '{FS=","; day=$4;value+=$1} END{ print day,value/NR}' file
27:May:2017 2109.41I need output like this:
Average for 25th May is *average_for_25th_day*
Average for 27th May is *average_for_26th_day*
Average for 28th May is *average_for_27th_day* | Calculating average in awk based on column condition in csv |
I don't know of a sort implementation that understands \t or other such character representations, you need to use ANSI-C quoting instead:
sort --field-separator=$'\t' --key=1,1 --key=2,2n t.tsvAlso, according to this macOS man page, "The Apple man page for sort includes GNU long options for all the above, but these have not (yet) been implemented under macOS." In recent releases of macOS, both --key and --field-separator are implemented for sort, but I would still use the standard short options for guaranteed portability:
sort -t $'\t' -k 1,1 -k 2,2n t.tsvThe above command, with macOS, GNU, and busybox sort, returns:
$ sort -t $'\t' -k 1,1 -k 2,2n t.tsv
2022/05/05 -258.03
2022/05/07 -18.10
2022/05/09 -132.60
2022/05/09 -10.74
2022/05/12 -20.20
2022/05/12 -18.56
2022/05/17 -112.91
2022/05/17 -64.78
2022/05/17 -51.43
2022/05/17 -11.00
2022/05/18 -13.96
2022/05/18 -13.96
2022/05/18 -7.51
2022/05/19 -17.08
2022/05/20 -33.08 |
Here's my tab-delimited file t.tsv:
$ cat t.tsv
2022/05/05 -258.03
2022/05/07 -18.10
2022/05/09 -10.74
2022/05/09 -132.60
2022/05/12 -18.56
2022/05/12 -20.20
2022/05/17 -11.00
2022/05/17 -112.91
2022/05/17 -51.43
2022/05/17 -64.78
2022/05/18 -13.96
2022/05/18 -13.96
2022/05/18 -7.51
2022/05/19 -17.08
2022/05/20 -33.08I am using MacOS 12.4 sort (from man page: The sort utility is compliant with the IEEE Std 1003.1-2008 (“POSIX.1”) specification) to sort first by col 1 in alpha seq ascending, then by col2 in numeric ascending.
$ cat t.tsv|sort --field-separator='\t' --key=1,1 --key=2,2n
2022/05/05 -258.03
2022/05/07 -18.10
2022/05/09 -10.74
2022/05/09 -132.60
2022/05/12 -18.56
2022/05/12 -20.20
2022/05/17 -11.00
2022/05/17 -112.91
2022/05/17 -51.43
2022/05/17 -64.78
2022/05/18 -13.96
2022/05/18 -13.96
2022/05/18 -7.51
2022/05/19 -17.08
I'm baffled as to why the second column isn't being sorted in ascending numeric sequence when the first column is the same. Numerous SE answers to this same question all say that (a) you specify single columns as --key=1,1, and (b) you may apply options such as -n to individual key definitions like --key=2,2n.
Update: I should mention that my shell is bash.
| How do I use sort on multiple columns with different data types |
One possible explanation is that the en_US.utf8 locale is not available on your system.
You can use locale -a to get the list of available locales, locale -a | grep en_US for the list of US English ones.
If that locale was installed, LC_ALL=en_US.utf8 locale -k LC_NUMERIC would output something like:
decimal_point="."
thousands_sep=","
grouping=3;3
numeric-decimal-point-wc=46
numeric-thousands-sep-wc=44
numeric-codeset="UTF-8"and LC_ALL=en_US.utf8 locale thousands_sep would output ,.
Otherwise, you'd likely get an error about the locale not being available.
If on Debian, you can select which locales you want to enable with (as root):
dpkg-reconfigure locales

Please refrain from enabling all possible locales; enabling some locales, like those using the BIG5, BIG5HKSCS or GB18030 character sets, would introduce some vulnerabilities on your system (those charsets have characters whose encoding contains the encoding of backtick and backslash, causing all sorts of bugs, some of which easily turn into vulnerabilities). Some locales have unusual sorting orders or case conversion rules that can also trip some software.
Note that C and POSIX are the only locales (they are meant to be the same) that POSIX guarantees to be found on POSIX systems. It requires the thousand_sep to be the empty string in that locale though which means it's of no use in your case.
On GNU systems at least, while you won't have any guarantee that the en_US.UTF-8 locale (or any other locale) is enabled, usually the source for the locale is available along with the localedef command to compile it, so you should be able to compile that locale in a temporary directory as a normal user. For instance, you could define a us-sort script as:
#! /bin/sh -

if l=$(locale -a | grep -ixm1 -e en_US.UTF-8 -xe en_US.utf8); then
LC_ALL=$l exec sort "$@"
else
d=$(mktemp -d) || exit
trap 'rm -rf -- "$d"' INT TERM HUP EXIT localedef -i en_US -f UTF-8 -- "$d/en_US.UTF-8" &&
LOCPATH=$d LC_ALL=en_US.UTF-8 sort "$@"
fi

That would compile that locale in a temporary directory when not available and run sort in it. That would be slow though as compiling a locale is an expensive operation.
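With that script saved as us-sort (the name used above) and made executable, the numbers from the question can be piped through it just like through sort itself:

chmod +x us-sort
printf '%s\n' -4.00 40.00 4,000.00 | ./us-sort -h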
|
I have a German locale and need to sort US formatted numbers with commas as the thousands separator. Seems I don't override the locale properly?
sort --version
sort (GNU coreutils) 8.30Example:
echo "-4.00\n40.00\n4,000.00"|LC_ALL=en_US.utf8 sort -h
-4.00
4,000.00
40.00I actually don't expect it to change the order as 4,000 is the largest.
locale
LANG=de_DE.UTF-8
LANGUAGE=
LC_CTYPE="de_DE.UTF-8"
LC_NUMERIC="de_DE.UTF-8"
LC_TIME=de_DE.utf8
LC_COLLATE="de_DE.UTF-8"
LC_MONETARY="de_DE.UTF-8"
LC_MESSAGES="de_DE.UTF-8"
LC_PAPER="de_DE.UTF-8"
LC_NAME="de_DE.UTF-8"
LC_ADDRESS="de_DE.UTF-8"
LC_TELEPHONE="de_DE.UTF-8"
LC_MEASUREMENT="de_DE.UTF-8"
LC_IDENTIFICATION="de_DE.UTF-8"
LC_ALL= | How to sort comma-thousand separated numbers while on other locale |
A -k1,2 key specification specifies one key that starts at the start of the first column (includes the leading blanks as the default column separator is the transition from a non-blank to a blank) and ends at the end of the second column.
It's important to realise it's only one key. If you need two keys, you need two -k options. When sorting, sort will compare the "1 50" string with "1 1000" numerically. For a numerical comparison, those strings are converted to numbers by considering the leading part (ignoring leading blanks) that looks like a valid number. So we'll be comparing 1 and 1. As they are equal, sort will revert to the fall-back sorting to determine ties which is a lexical comparison of the whole line.
With -n -k1,1 -k2,2, sort compares "1" with "1" and then as it's a tie, considers the second key (" 50" vs " 1000"). As it's a numerical sort, -n -k1 -k2 would also work (where -k1 specifies a key that starts at the first field and ends at the end of the line, same as the full line).
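If you have GNU sort, its --debug option underlines the exact part of each line used for every key, which makes the difference between the two specifications easy to see (num.data.txt being the file from the question):

sort --debug -n -k1,2 num.data.txt
sort --debug -n -k1,1 -k2,2 num.data.txt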
|
With the following example data, both columns are numeric, but the second has different numbers of digits.
2 9
1 1000
1 50
3 0I want to sort based on both columns. Specifying them separately with the numeric flag, -n, produces the result I want.
sort -n -k1,1 -k2,2 num.data.txtgives
1 50
1 1000
2 9
3 0which is what I want.
However,
sort -n -k1,2 num.data.txtgives data that appear to be sorted alphabetically:
1 1000
1 50
2 9
3 0I know that sort -n -k1,2 num.data.txt is the same as sort -n num.data.txt (which gives the same result) when there are only two columns, but the data I'm actually working with has more columns.
Why is there this discrepancy between the two methods?
| Bash numeric sort gives different results when columns are selected simultaneously vs. together |
With *BSD's rs(1), assuming the input file is well-formed:
rs -C -t $( awk '/^$/ { print NR-1; exit }' file ) <file |
I am trying to write a script to change the following set of numbers
2.659980
3.256998
4.589778
2.1201502.223365
2.325566
2.121112
3.0201114.065112
0.221544
1.236665
1.395958to the following form (essentially making a matrix out of a list of numbers which are separated by an empty line)
2.659980 2.223365 4.065112
3.256998 2.325566 0.221544
4.589778 2.121112 1.236665
2.120150 3.020111 1.395958Can somebody help how to achieve this.
| Rearranging list of numbers to make a matrix |
Testing if a string is a number
You don't need regular expressions for that. Use a case statement to match the string against wildcard patterns: they're less powerful than regex, but sufficient here. See Why does my regular expression work in X but not in Y? if you need a summary of how wildcard patterns (globs) differ from regex syntax. This works in any sh implementation (even pre-POSIX Bourne).
case $var in
'' | *[!0123456789]*) echo >&2 "This is not a non-negative integer."; exit 2;;
[!0]*) echo >&2 "This is a positive integer. I like it.";;
0*[!0]*) echo >&2 "This is a positive integer written with a leading zero. I don't like it."; exit 2;;
*) echo >&2 "This number is zero. I don't like it."; exit 2;;
esac

Shell portability
Any Unix system has an implementation of sh. Any non-antique Unix or POSIX system has an sh implementation that (mostly) follows the POSIX specification. It's usually in /bin/sh, but there are a few commercial unices where /bin/sh is an antique Bourne shell and the modern POSIX sh is in /usr/posix/bin/sh or some such.
Use #!/usr/bin/env sh as a shebang line for practical portability if #!/bin/sh doesn't cut it for you.
[[ … ]] is not available in POSIX sh. It's available in ksh93, mksh, bash and zsh, but not in dash (a popular /bin/sh on Linux) or BusyBox (a popular /bin/sh on embedded Linux). Portable sh doesn't have regex matching built in, only wildcard matching. You can use grep, awk or sed to get regex matching on a POSIX system.
Quoting the regex for =~
Ksh93, bash and zsh have a regex matching operator =~ in [[ … ]] conditional expressions. They have slightly different quoting rules.
In bash ≥3.1, regex characters only have their special effect on the right of the =~ operator if they're unquoted. So [[ 100 =~ ^[1-9][0-9]*$ ]] is true but [[ 100 =~ '^[1-9][0-9]*$' ]] is false ([[ $x =~ '^[1-9][0-9]*$' ]] only matches strings that have ^[1-9][0-9]*$ as a substring).
In ksh 93u, the effect of quoting a character in a regex depends on the character: characters that are also wildcard characters must not be quoted, but characters that aren't can be in single or double quotes (but not preceded by a backslash). So [[ 100 =~ ^[1-9][0-9]*$ ]] is true, and so is [[ 100 =~ '^'[1-9][0-9]*'$' ]] but [[ 100 =~ '^[1-9][0-9]*$' ]] is false (it matches anything with the substring [1-9][0-9]*) and [[ 100 =~ ^[1-9][0-9]*\$ ]] is also false (it matches any string starting with a nonzero digit, then more digits and a $).
In zsh, any regex character can be quoted or not. Note that this means that to include a character literally, you need two levels of quoting, e.g. \\* or '\*' to match an asterisk. So both [[ 100 =~ ^[1-9][0-9]*$ ]] and [[ 100 =~ '^[1-9][0-9]*$' ]] are true.
I think putting the regex in a variable is the most reliable way not to depend on the shell's idiosyncrazies.
regex='…' # Use extended regular expression syntax here, with '\'' if you need a literal apostrophe
if [[ $string =~ $regex ]]; …ranges in regexp/wildcard bracket expressions
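With the pattern from the question, that approach looks like this (it behaves the same way in bash, ksh93 and zsh):

regex='^[1-9][0-9]*$'
if [[ $var =~ $regex ]]; then
  echo "positive integer without a leading zero"
fi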
What ranges like [0-9] match depends on the implementation and locale. In general you can't expect it to match on 0123456789 only (though you should be able to assume it will match on at least those). If it's important you match on 0123456789 only, avoid ranges and name the characters individually.
|
That should be easy, just use [[ "$var" =~ '^[1-9][0-9]*$' ]]. But I don't get the behavior I expect excepted with zsh. I don't control the machine where the script will be run, so portability along reasonable shells (Solaris legacy Bourne shell is not reasonable) is an issue. Here are some tests:
% zsh --version
zsh 4.3.10 (x86_64-redhat-linux-gnu)
% zsh -c "[[ 100 =~ '^[1-9][0-9]*\$' ]] && echo OK"
OK
% sh --version
GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
% sh -c "[[ 100 =~ '^[1-9][0-9]*\$' ]] && echo OK"
% bash --version
GNU bash, version 4.2.53(1)-release (x86_64-unknown-linux-gnu)
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
% bash -c "[[ 100 =~ '^[1-9][0-9]*\$' ]] && echo OK"
% ksh --version
version sh (AT&T Research) 93u+ 2012-08-01
% ksh -c "[[ 100 =~ '^[1-9][0-9]*\$' ]] && echo OK"
% I seems to be missing something. What?
| Testing if a string is a number |
Using curl to fetch, lynx to parse, and awk to extract
Please don't parse XML/HTML with sed, grep, etc. HTML is context-free, but sed and friends are only regular.1
url='https://usa.visa.com/support/consumer/travel-support/exchange-rate-calculator.html/?fromCurr=USD&toCurr=EUR&fee=0&exchangedate=02/05/2017'
user_agent='Mozilla/5.0 (X11; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0'

curl -sA "${user_agent}" "${url}" \
| lynx -stdin -dump \
| awk '/1 EUR/{ print $4 }'You need some kind of HTML parser to reliably extract content. Here, I use lynx (a text-based web browser), but lighter alternatives exist.
Here, curl retrieves the page, then lynx parses it and dumps a textual representation. The /1 EUR/ causes awk to search for the string 1 EUR, finding only the line:
1 EUR = 1.079992 USDThen { print $4 } makes it print the fourth column, 1.079992.
Alternative solution without curl
Since my HTML parser of choice is lynx, curl is not necessary:
url='https://usa.visa.com/support/consumer/travel-support/exchange-rate-calculator.html/?fromCurr=USD&toCurr=EUR&fee=0&exchangedate=02/05/2017'
user_agent= 'Mozilla/5.0 (X11; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0'lynx -useragent="${user_agent}" -dump "${url}" \
| awk '/1 EUR/{ print $4 }'1 A pcre (grep -P in some implementations) can describe some context-free or even context-sensitive stringsets, but not all of them.Edited 2017-12-23 to add a user-agent string (pretending to be Firefox), as the site currently blocks curl and lynx.
|
Given this:
<p>Currencies fluctuate every day. The rate shown is effective for transactions submitted to Visa on <strong>February 5, 2017</strong>, with a bank foreign transaction fee of <st <span><strong>1</strong> Euro = <strong>1.079992</strong> United States Dolla <p>The 'currency calculator' below gives you an indication of the cost of purchas <p>February 5, 2017</p><div class="clear-both"></div> <!-- removed clearboth- <p><strong>1 EUR = 1.079992 USD</strong></p> <div class="clear-both"></di <table width="290" border="0" cellspacing="0" cellpadding="3"> <a href="/content/VISA/US/en_us/home/support/consumer/travel-support/exchange e-calculator.html"> <button class="btn btn-default btn-xs"><span class="retur <p><p>This converter uses a single rate per day with respect to any two currencies. Rates displayed may not precisely reflect actual rate applied to transaction amount due to rounding differences, Rates apply to the date the transaction was processed by Visa; this may differ from the actual date of the transaction. Banks may or may not assess foreign transaction fees on cross-border transactions. Fees are applied at banks’ discretion. Please contact your bank for more information.</p>
I need to extract 1.079992
I'm using:
sed -E 's:.*(1\.[0-9\.]+).*:\1:g... which works ... but is there a more elegant way?
Alternatively, is there a way to get that value straight from curl?
(My full command is: curl 'https://usa.visa.com/support/consumer/travel-support/exchange-rate-calculator.html/?fromCurr=USD&toCurr=EUR&fee=0&exchangedate=02/05/2017' | grep '<p><strong>1' | sed -E 's:.*(1\.[0-9\\.]+).*:\1:g'
)
| Need to extract a number from HTML |
Using bash
To generate an even random number in hex:
$ printf '%x\n' $((2*$RANDOM))
d056

Or:
$ hexVal=$(printf '%x\n' $((2*$RANDOM)))
$ echo $hexVal
f58a

To limit the output to smaller numbers, use modulo, %:
$ printf '%x\n' $(( 2*$RANDOM % 256 ))
4a

Using openssl
If you really want to use a looping solution with openssl:
while hexVal="$(openssl rand -hex 1)"
do
((0x$hexVal % 2 == 0)) && break
doneThe 0x signals that the number which follows is hex.
Rules for casting numbers in bash
From man bash:Constants with a leading 0 are interpreted as octal numbers. A
leading 0x or 0X denotes hexadecimal. Otherwise, numbers take the
form [base#]n, where the optional base is a decimal number
between 2 and 64 representing the arithmetic base, and n is a number
in that base. If base# is omitted, then base 10 is used. When
specifying n, the digits greater< than 9 are represented by the
lowercase letters, the uppercase letters, @, and _, in that order. If
base is less than or equal to 36, lowercase and uppercase letters may
be used interchangeably to represent numbers between 10 and 35. [Emphasis added] |
I am trying to write a script to get a random, even hex number. I have found the the openssl command has a convenient option for creating random hex numbers. Unfortunately, I need it to be even and my script has a type casting error somewhere. Bash thinks that my newly generated hex number is a string, so when I try to mod it by 2, the script fails. Here is what I have so far:
...
hexVal="$(openssl rand -hex 1)"
while [ `expr $hexVal % 2` -ne 0 ]
do
hexVal="$(openssl rand -hex 1)"
done
...I have tried various other combinations as well, to no avail. If someone could tell me what is wrong with my syntax, it would be greatly appreciated.
| Proper type casting in a shell script for use with while loop and modulus |
Using sed
This will print only lines that start with a positive number:
sed -n 's/^\([[:digit:]][^ ,]*\).*/\1/p'Combined with one of your pipelines, it would look like:
h5totxt hsli0.126.h5 | harminv -vt 0.1 -w 2-3 -a 0.9 -f 200 | sed -n 's/^\([[:digit:]][^ ,]*\).*/\1/p'How it works-n
This tells sed not to print any line unless we explicitly ask it to.
s/^\([[:digit:]][^ ,]*\).*/\1/p
This tells sed to look for lines that start with a positive number and print only that number.
In a regex, ^ matches only at the beginning of a line. [[:digit:]] matches any digit. [^ ,]* matches anything that follows that digit except a space or a comma. This is all grouped with parenthesis so that we can refer to the number later as \1. The whole line is then replaced with the number and, with the p option, we tell sed to print it.
One used to use [0-9] to match digits. With the advent of unicode fonts, that is no longer reliable. The expression [[:digit:]], however, is unicode safe.Alternative using extended regex
If you are using GNU sed (which is true of all linux systems), then the -r option can be used to get extended regular expressions. With extended regex, parens used for grouping do not need to be escaped:
sed -rn 's/^([[:digit:]][^ ,]*).*/\1/p'On OSX or other BSD systems, use -E in place of -r.
Using awk
This does the same but using awk:
awk -F, '/^[[:digit:]]/{print $1}'Combined with your pipeline:
h5totxt hsli0.126.h5 | harminv -vt 0.1 -w 2-3 -a 0.9 -f 200 | awk -F, '/^[[:digit:]]/{print $1}' |
I am running Ubuntu 14.04.1 LTS 64-bit with Bash 4.3.11(1)-release I have a program called harminv producing output as follows:
$ h5totxt hsli0.126.h5 | harminv -vt 0.1 -w 2-3 -a 0.9 -f 200
# harminv: 1902 inputs, dt = 0.1
frequency, decay constant, Q, amplitude, phase, error
# searching frequency range 0.31831 - 0.477465
# using 200 spectral basis functions, density 6.60692
-2.14026, 3.511909e-05, 30471.5, 0.922444, 1.26783, 1.383955e-06
2.14013, 2.052504e-05, 52134.7, 0.920264, -1.27977, 3.426846e-07
# harminv: 2/6 modes are ok: errs <= 1.000000e-01 and inf * 3.426846e-07
, amps >= 0, 9.000000e-01 * 0.922444, |Q| >= 10When the -v(verbose) option is omitted I have a much neater output as follows:
$ h5totxt hsli0.126.h5 | harminv -t 0.1 -w 2-3 -a 0.9 -f 200
frequency, decay constant, Q, amplitude, phase, error
-2.14026, 3.511909e-05, 30471.5, 0.922444, 1.26783, 1.383955e-06
2.14013, 2.052504e-05, 52134.7, 0.920264, -1.27977, 3.426846e-07I would like to be able to extract the positive numbers in the first column of the output in both cases, but have no idea on how to do it, except that I can use sed or awk. I would be grateful if someone points me in the right direction, and my aim is to record each positive number to make a plot against some other variable.
| How to extract the positive numbers in the first column from an output as in the question? |
How about this?
awk -F: '{printf "%.1f", ($1*60+$2)/60}' <<< 1:30 |
How can I use awk to convert a time value to a decimal value.
I have been using this command for the other way round (-> from):
awk '{printf "%d:%02d", ($1 * 60 / 60), ($1 * 60 % 60)}' <<< 1.5prints: 1:30how would I calculate this value 1:30 back to the decimal value 1.5?
| Awk - convert time value to decimal value |
Using awk:
awk '{ for(i = 1; i <= NF; i++) if($i > 50) { system("bash /other/shell/script.sh") } }' file.txtUsing a bash script:
#!/bin/bash

file=/path/to/file.txt

for num in $(<"$file"); do
if ((num>50)); then
bash /other/shell/script.sh
fi
doneThis will loop through each number in file.txt and check if that number is greater than 50, then run the script you provide. However this may be an issue or unnecessary; If there are multiple numbers in your text file greater than 50 should you run your script multiple times?
|
I have a text file (created by a script) which contains only numbers in a single line like "5 17 42 2 87 33". I want to check every number with 50 (example), and if any of those numbers is greater than 50, I want to run another shell script.
I'm using a vumeter program, my purpose is to run a sound recognition program if noise level is high. So I just want to determine a threshold.
| Compare the numbers in text file, if meets the condition run a shell script |
The following should work:
perl -pe 's/([0-9.e-]+)/$1 == 0 ? $1 : .001 + $1/ge' < input.txt > output.txt

-p process the file line by line
s/pattern/replacement/ is a substitution.
[0-9.e-]+ matches one or more of the given characters, i.e. the numbers
() remembers each number in $1
/g applies the substitution globally, i.e. as many times as needed for each line
/e evaluates the replacement as code
condition ? then : else is the "ternary operator": if the condition is true ($1 == 0, i.e. the remembered number equals 0), it returns the number, otherwise it adds .001 to it. |
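As a small sanity check on a made-up fragment of such a line (zeros are left alone, every non-zero value is increased by 0.001):

$ echo '0, 2.070347e-05, 0, 0.000421869,' | perl -pe 's/([0-9.e-]+)/$1 == 0 ? $1 : .001 + $1/ge'
0, 0.00102070347, 0, 0.001421869,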
Background:
(1) Here is a screen capture of a part of my ascii file (over 600Mb):(1.1) Here is a part of the code:
0, 0, 0, 0, 0, 0, 0, 0, 3.043678e-05, 3.661498e-05, 2.070347e-05,
2.47175e-05, 1.49877e-05, 3.031176e-05, 2.12128e-05, 2.817522e-05,
1.802658e-05, 7.192285e-06, 8.467806e-06, 2.047874e-05, 9.621194e-05,
4.467542e-05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.000421869,
0.0003081213, 0.0001938675, 8.70334e-05, 0.0002973858, 0.0003385935,
8.763598e-05, 2.743326e-05, 0, 0.0001043894, 3.409237e-05, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2.503832e-05, 1.433673e-05, 2.557402e-05,
3.081098e-05, 4.044465e-05, 2.480817e-05, 2.681778e-05, 1.533265e-05,
2.3156e-05, 3.193812e-05, 5.325314e-05, 1.639066e-05, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 2.259782e-05, 0.0004197799, 2.65868e-05, 0.0002485498,
3.485129e-05, 2.454055e-05, 0.0002096856, 0.0001910835, 1.969936e-05,
2.974743e-05, 8.983165e-05, 0.0004263787, 0.0004444561, 0.000241368, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,(2) Each red rectangle contains 74 elements.
(3) I want to add a number, e.g. 0.001, to each of the non-zero entries.My thought:
(1) I was told by my friend that /perl can help to finish this task but I am new to this programming script.
(2) I guess the strategy is to read each of the numbers and
(i) if it is a zero, then neglect it; or,
(ii) if it is non-zero, then add 0.001 to this number and replace this number.(3) My worry is that:
Whether perl is able to read a number in scientific notation (i.e. 1.303637e-05 is indeed equal to 0.00001303637)?
| Add a number in a huge ASCII file |
You tagged this with bash, but mentioned awk, so I hope it's okay to use that:
$ awk -vn=0 'NR == 1 {print; next}
$0 != "" { k = $1; a[n] = $2; b[n] = $3; n++ }
$0 == "" { for (i = 0; i < n ; i++) {
printf "%s %s %s\n", k, a[i], b[n-i-1]; }
n=0; print }' < dataOn the very first line (NR == 1), we just print it and go on.
Then,
for nonempty lines, it loads the second and third fields to arrays a and b, and for empty lines, it goes through the arrays and prints a in order, and b in the inverse order and finally resets the counter n.
This assumes that (1) the point to mirror over is actually in the middle; (2) the first field is always the same within each block (as it is in your code); and (3) that there is an empty line after each block (though additional empty lines shouldn't matter. Add an empty line at the very end if you don't have one there).
|
Sample Input
file name
0.00 -1.0000 number1
0.00 -0.8000 number2
0.00 -0.6000 number3
0.00 -0.4000 number4
0.00 -0.2000 number5
0.00 0.0000 number6
0.00 0.2000 number7
0.00 0.4000 number8
0.00 0.6000 number9
0.00 0.8000 number10
0.00 1.0000 number110.02 -1.0000 number12
0.02 -0.8000 number13
0.02 -0.6000 number14
0.02 -0.4000 number15
0.02 -0.2000 number16
0.02 0.0000 number17
0.02 0.2000 number18
0.02 0.4000 number19
0.02 0.6000 number20
0.02 0.8000 number21
0.02 1.0000 number220.04 -1.0000 number23
0.04 -0.8000 number24
0.04 -0.6000 number25
0.04 -0.4000 number26
0.04 -0.2000 number27
0.04 0.0000 number28
0.04 0.2000 number29
0.04 0.4000 number30
0.04 0.6000 number31
0.04 0.8000 number32
0.04 1.0000 number33goal
(Referring to columns/fields in awk nomenclature, ie. $1 = field 1)
As you can see, there are 3 blocks of data above. Within each block, $1 equals a constant value. For each block of $1 = constant, I would like to exchange $3 symmetrically around where $2 = 0. The result would be the following desired output:
desired output
file name
0.00 -1.0000 number11
0.00 -0.8000 number10
0.00 -0.6000 number9
0.00 -0.4000 number8
0.00 -0.2000 number7
0.00 0.0000 number6
0.00 0.2000 number5
0.00 0.4000 number4
0.00 0.6000 number3
0.00 0.8000 number2
0.00 1.0000 number10.02 -1.0000 number22
0.02 -0.8000 number21
0.02 -0.6000 number20
0.02 -0.4000 number19
0.02 -0.2000 number18
0.02 0.0000 number17
0.02 0.2000 number16
0.02 0.4000 number15
0.02 0.6000 number14
0.02 0.8000 number13
0.02 1.0000 number120.04 -1.0000 number33
0.04 -0.8000 number32
0.04 -0.6000 number31
0.04 -0.4000 number30
0.04 -0.2000 number29
0.04 0.0000 number28
0.04 0.2000 number27
0.04 0.4000 number26
0.04 0.6000 number25
0.04 0.8000 number24
0.04 1.0000 number23background context
In my actual input, $1 continues in the sequence of {0.00..0.02..15.0}. Furthermore, within each block, $2 proceeds as {-14..0.2..14}. Therefore, in total there are 751 blocks, and each block alone consists of 141 lines (or 142 lines, including the additional title/empty line preceding each block).
So it would be helpful to have a script that can go through each of the 751 blocks one-by-one, and reflect that block's $3 arbitrary values symmetrically around the median line for that individual block (the 71st line in each block, or 72nd including the empty line above each block).
Thank you!
| How to reflect data points across their median line? |
You can read the value in the first column of the last line of the file like this:
#!/bin/bash
# This shell script is to tabulate and search for SR
next_n=$(($(tail -n1 /tmp/cases.txt 2>/dev/null | cut -f1) + 1))
read -p "Enter your SR number : " SR
echo -e "$next_n\t$SR\t$(date)" >> /tmp/cases.txtcut -f1 selects the first field of the line, fields being sequences of characters separated by tabs.
This also works when the file is empty or non-existent: next_n is set to 1 in this case.
|
I have the below script:
#!/bin/bash
# This shell script is to tabulate and search for SR
n=0 # Initial value for Sl.No
next_n=$[$n+1]
read -p "Enter your SR number : " SR
echo -e "$next_n\t$SR\t$(date)" >> /tmp/cases.txtWhen I run the script for the first time, I will enter SR = 123.
The output will be:
1 123 <date>I would like to run the script again, with a new value for SR = 456. I would like the output to be:
1 123 <date>
2 456 <date>However, my script always print column 1 as 1,1,1,1 because the n is getting re-initialized.
Is there a way to automatically increment column 1 by a factor of 1 every time the script is executed for a new SR value?
| Increment a column every time a script is executed |
Version 3: use an awk file (f.awk) such as
function tenth(x) {
u = x ; if ( u < 0 ) u = -x ;
b=10 ;
a=b-2 ;
if ( u >= 10 ) {
d=int(log(u)/log(10)) ;
a=b-d-1 ;
}
printf "%*.*f",b,a,x ;
}
length($1) == 4 { print ; next ;}
NF == 1 { d=int(log($1)/log(10)) ;if (d> -1) d++ ; printf " %.7fE%+03d\n",$1/(10^d),d ;}
NF == 2 { printf " " ; tenth($1); printf " " ; tenth($2) ; printf "\n" ;}wherelengtht$1) == 4 { print ; next ;} will leave alone line where first field is four letter (that may be 1234 though)
function tenth(x) : defines a function that adjusts the formatting.
"%*.*f" string adjust size/precision of %f conversion. first * is replaced by b, second * is replaced by a.
int(log()/log(10)) gives the decimal log, which adjusts the representation to your specific need. Use it with

awk -f f.awk input

which gives as a result
BALT 1
54.5000000 -161.070000
0.3958638E+01
0.1691576E-01
BALT 2
-9.20000000 67.1200000
0.4075299E+01
0.1951653E-01
BALT 3
43.8300000 142.520000
0.4089198E+01
0.5873400E-02
0.00000000 1.00000000
-3.14150000 2.71828183 |
I have a data file abc.txt in the following format:
BALT 1
54.500 -161.070
3.95863757
0.01691576
BARM 2
-9.200 67.120
4.07529868
0.01951653
BKSR 3
43.830 142.520
4.08919819
0.00587340I need to convert it in the format:
BALT 1
54.5000000 -161.070000
0.3958637E+01
0.1691576E-01
BARM 2
-9.20000000 67.1200000
0.4075298E+01
0.1951653E-01
BKSR 3
43.8300000 142.520000
0.4089198E+01
0.5873400E-02The total spaces taken by the numbers in 2nd line should be 10 excluding -ve sign (e.g. 54.500 as 54.5000000 and -161.070 as -161.070000). The spaces for 3rd and 4rth line should be 13 (e.g. 3.95863757 as 0.3958637E+01). And BALT or BARM are variables, it may be another words with four characters.Thank you. | Formatting from Decimal to Exponential |
The seq utility is one way to generate numbers:
for start in $(seq 200000 500 209000); do mkdir "${start}"-"$((start + 499))"; doneThe syntax is seq start increment end.
|
[EDITED to reflect answers below]
I am looking for a way to create blocks of folders / directories from the command line or a script that will generate a top level folder titled "200000-209999" and then inside of that folder, sub-folders named thusly:
200000-200499
200500-200999
201000-201499
... etc ...
... etc ...
208500-208999
209000-209499
209500-209999The naming is spaced like you see, and then I would want to set up the next batch of top-level/sub-folders, "210000-219999," "220000-229999," etc.
[EDIT]
I came up with the following script based on the answers below to accomplish exactly what I am looking for. My additions may not be elegant scripting, so if it can be improved upon, let me know.
#!/bin/bash
#
# mkfolders.sh
#
# Ask user for starting range of job #'s and create the subsequent
# folder hiearchy to contain them all.
#
###clear
read -p 'Starting Job# in range: ' jobnum
mkdir "${jobnum}"-"$((jobnum + 9999))"
for start in $(seq $jobnum 500 $((jobnum+9999))); do mkdir "${jobnum}"-"$((jobnum + 9999))"/"${start}"-"$((start + 499))"; done
echo
echo Done!
echo | Creating numerous ranges or blocks of folders/directories? |
With sed
hdp-select | sed '/^hadoop-client - /!d;s///;s/-.*//;s/\.//g' |
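That one-liner is dense; a commented, spread-out version of the same program (GNU sed accepts comments inside the script) does exactly the same thing:

hdp-select | sed '
  # keep only the hadoop-client line
  /^hadoop-client - /!d
  # strip the "hadoop-client - " prefix (an empty pattern re-uses the previous regex)
  s///
  # drop the build suffix: 2.6.4.0-91 -> 2.6.4.0
  s/-.*//
  # remove the dots: 2.6.4.0 -> 2640
  s/\.//g
'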
we have this CLI syntax
hdp-select | grep hadoop-client
hadoop-client - 2.6.4.0-91

The final goal is to get the number, as an example:
2640

We capture the last number, remove the - and remove the .
so I did
hdp-select | grep hadoop-client | awk '{print $3}' | sed s'/-/ /g' | awk '{print $1}' | sed s'/\.//g'
2640

but this is an ugly solution.
I would be happy to know another, more elegant solution.
| Parsing a Hadoop version string with Bash |
This is not doing what you think it does, it works only by accident:
[^years]+

It means: match any character except y, e, a, r and s, at least once.
Also, instead of Look-behind assertion, I would use keep-out. It has the benefit that it can be of variable length, then you can easily match both Age and Height.
(Age|Height)=\K

Then, instead of making a negative match, use a positive one, matching only numbers:
grep -Po '(Age|Height)=\K\d+'

--
$ echo "Age=22 and Height=6" | grep -Po '(Age|Height)=\K\d+'
22
6 |
I have a file with following type of expression in every line "Age=22 years and Height=6 feet", I want to extract Age and height numbers only.
I have tried
grep -oP '(?<=Age=)[^years]+' $f | awk '{ printf "%d \n",$1; }and get age correctly. How can I get both Age and height. When I try nested pattern match i get height only.
This is the pattern I've tried
grep -oP '(?<=Age=)[^years]+.+(?<=Height=)[^feet]+' $f | awk '{ printf "%d \n",$1; } | Extracting numbers between two string patterns |
cat numbers.txt | awk '{sum += $1; if (NR % 5 == 0) {print sum; sum=0}} END {if (NR % 5 != 0) print sum}'sum starts as 0 in awk. Every fifth line it prints the current sum of numbers out, then resets the sum to zero and goes over the next five lines. The END at the end handles the edge case of the number of lines in the file not being a multiple of five, eg if there's 18 lines in the file it will print the sum of the last 3 lines. It also handles the edge case of not printing a unwanted zero when the number of lines is a multiple of five.
|
I am writing a parser, and have to do some fancy stuff. I am trying not to use python, but I might have to at this point.
Given an STDOUT that looks like this:
1
0
2
3
0
0
1
0
0
2
0
3
0
4
0
5
0
2
.
.
.For 100,000 lines. What I need to do is add up every 5, like so:
1 - start
0 |
2 | - 6
3 |
0 - end
0 - start
1 |
0 | - 3
0 |
2 - end
0 - start
3 |
0 | - 7
4 |
0 - end
5
0
2
.
.
.The -, |, start, end, are all for visual representation, I just need it in a column list:
6
3
7
.
.
.I currently have a method of doing this by using an incremental head -n $i and tail -n 5 to cut 5 rows out of the list, then I use paste -sd+ - | bc to add up all the values. But this is wayyyy to slow because there are 100,000 lines.
How can I do this better?
| Adding up every 5 lines of integers |
The issue in your code,
awk 'BEGIN{min=9}{for(i=2;i<=2;i++){min=(min<$i)?min:$i}print min;exit}' file.dat... is that you immediately exit after processing the first line of input. Your middle block there need to be triggered for every line. Then, in an END block, you can print the values that you have found. You do this in another code snippet:
awk '{if ($1 > max) max=$1}END{print max}'Another issue is that you initialize min with a magic number (9 in the first code that I quoted, and 0 in the second piece; variables that are not explicitly initialized has the value 0 if you use them in calculations). If this magic number does not fall within the range of numbers in the actual data, then the calculated min and/or max values will be wrong. It is better to initialize both min and max to some value found in the data.
To keep track of both min and max values, you need two variables, and both of these needs to be checked against the data in the file for every line, to see whether they need updating.
As awk supports arrays, it would be natural to use arrays for min and max, with one array element per column. This is what I have done in the code below.Generalized to any number of columns:
NF == 0 {
# Skip any line that does not have data
next
}!initialized {
# Initialize the max and min for each column from the
# data on the first line of input that has data.
# Then immediately skip to next line. nf = NF for (i = 1; i <= nf; ++i)
max[i] = min[i] = $i initialized = 1
next
}{
# Loop over the columns to see if the max and/or min
# values need updating. for (i = 1; i <= nf; ++i) {
if (max[i] < $i) max[i] = $i
if (min[i] > $i) min[i] = $i
}
}END {
# Output max and min values for each column. for (i = 1; i <= nf; ++i)
printf("Column %d: min=%s, max=%s\n", i, min[i], max[i])
}Given this script and the data in the question:
$ awk -f script.awk file
Column 1: min=0.0000, max=0.4916
Column 2: min=-24.1254, max=-23.4334The condition NF == 0 for the first block (which is executed for all lines) is to ensure that we skip blank lines. The test means "if there are zero fields (columns) of data on this line". The variable initialized will be zero from the start (logically false), but will be set to one (logically true) as soon as the first line that has data is read.
The nf variable is initialized to NF (the number of fields) on the line that we initialize the min and max values from. This is so that the output in the END block works even if the last line has zero fields.
|
Dear all I have a big data file lets say file.dat, it contains two columns
e.g file.dat (showing few rows)
0.0000 -23.4334
0.0289 -23.4760
0.0578 -23.5187
0.0867 -23.5616
0.1157 -23.6045
0.1446 -23.6473
0.1735 -23.6900
0.2024 -23.7324
0.2313 -23.7745
0.2602 -23.8162
0.2892 -23.8574
0.3181 -23.8980
0.3470 -23.9379
0.3759 -23.9772
0.4048 -24.0156
0.4337 -24.0532
0.4627 -24.0898
0.4916 -24.1254
note: data file has a blank line at the end of the fileExpected results
I want to find/extract the maximum and minimum from both the column
e.g
column-1
max - 0.4916
min - 0.0000similarly
column-2
max - -23.4334
min - -24.1254Incomplete solution (not working for column-2)
For Column-1
awk 'BEGIN{min=9}{for(i=1;i<=1;i++){min=(min<$i)?min:$i}print min;exit}' file.dat
0.0000cat file.dat | awk '{if ($1 > max) max=$1}END{print max}'
0.4916for column-2
awk 'BEGIN{min=9}{for(i=2;i<=2;i++){min=(min<$i)?min:$i}print min;exit}' file.dat
-23.4334cat file.dat | awk '{if ($2 > max) max=$2}END{print max}'
**no output showing**Problem
Please help me to find the min and max value from column-2
note: data file has a blank line at the end of the file
| how to extract maximum and minimum value from column 1 and column 2 |
Modified sample based on comments
$ cat ip.txt
7 60,72,96
7 60
3 601
2 60,72,962
5 60,3
43 60
3 52360$ grep -oP '^\h*\K\d+(?=\h+60\h*$)' ip.txt
7
43-oP print only matching portion, uses PCRE
^\h*\K ignore starting blank characters of line
\d+ the number to be printed
(?=\h+60\h*$) only if it is followed by blank characters, then 60 and then optional blanks until end of lineOr, just use awk for field based processing ;)
|
I have this file content:
63 41,3,11,12
1 31,60,72,96
7 41,3,31,14,15,68,59,60
7 60,72,96
7 60
1 41,3,31,31,14,15,68,59,60
60 41,3,115,12,13,66,96
1 41,3,11,12,13,66,96 I need to grep the '7' before the '60' (where the '60' is not followed by '72,96').
| Difficult grep. How can I isolate this number? |
awk '{
count[$1]++
min[$1]=(!($1 in min) || $2<min[$1]) ? $2 : min[$1]
max[$1]=(!($1 in max) || $2>max[$1]) ? $2 : max[$1]
}
END {
print "Name","Count","Minimum","Maximum"
print "----","-----","-------","-------"
for(i in count) print i,count[i],min[i],max[i]
}' file | column -tThe logic of the minimum array value assignment is:
If the name of the first field doesn't exist in the array (!($1 in min)) or (||) the second field is smaller than the current array value ($2<min[$1]), then (?) assign the new value $2, else (:) assign the old value min[$1].
The | column -t is used to pretty-print the result as table. You can remove it if you don't need it.
Output:
Name Count Minimum Maximum
---- ----- ------- -------
PION+ 2 0.167848 1.374297
PION- 3 0.215176 22.716532
NEUTRON 2 8.043279 20.900103 |
I need to find the max and min of each entry in my file:
#subset of my file
NEUTRON 20.900103
PION- 0.215176
PION- 22.716532
NEUTRON 8.043279
PION+ 1.374297
PION- 0.313350
PION+ 0.167848
How could I loop through the file and find the min and max for each Name when there are multiple entry names? I have used awk already to count each entry and have no repeats, but each repeat of a name comes with a number, and that number is what I'm trying to extract the max and min of for each entry.
ex output from whole file:
Name Count Minimum Maximum
-------- ----- --------- ---------
KAON- 1 5.489958 5.489958
NEUTRON 2 8.043279 20.900103
PHOTON 10 0.034664 1.897264
PION- 5 0.192247 22.716532
PION+ 7 0.167848 7.631051
PROTON 1 1.160216 1.160216 | How to calculate max and min for each unique entry |
Using sed and bc:
date +%z | sed -E 's/^([+-])(..)(..)/scale=2;0\1(\2 + \3\/60)/' | bc
This will give you 2.00 back in the timezone I'm in (+0200).
With strange/unusual timezones:
$ echo '+0245' | sed -E 's/^([+-])(..)(..)/scale=2;0\1(\2 + \3\/60)/' | bc
2.75
$ echo '-0245' | sed -E 's/^([+-])(..)(..)/scale=2;0\1(\2 + \3\/60)/' | bc
-2.75
The sed expression will turn the timezone into a "bc script". For the timezone +HHMM, the script will be
scale=2;0+(HH + MM/60)
For -HHMM it will be
scale=2;0-(HH + MM/60)
The zero is in there because my bc does not understand unary +.
If you are only ever going to deal with full-hour timezones, then you may use
date +%z | sed -E 's/^([+-])(..)../0\1\2/' | bc
which will give you integers.
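If bc is not available, the same whole-hours conversion can be sketched in plain bash (assuming the ±HHMM format that date +%z prints); the 10# prefix avoids the octal interpretation mentioned in the question:
offset=$(date +%z)      # e.g. -0400
sign=${offset:0:1}      # "+" or "-"
hh=${offset:1:2}        # hour part, e.g. "04"
hours=$(( ${sign}1 * 10#$hh ))
echo "$hours"           # -4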
|
I am doing some timezone calculations in bash. I'm getting some unexpected values when converting the timezone offset hour output to an integer to do some additional calculations.
Partial script:
offset=$(date +%z)
echo "$offset"
hours=$(( offset ))
echo "$hours"
Output
-0400
-256
Desired Output (I accidentally omitted the need to divide by 100 for the final output)
-0400
-4
I think that the arithmetic is getting evaluated as octal. How can I evaluate the output from date +%z as decimal?
| Convert timezone offset to integer |
This could be solved by converting the Cartesian coordinates into polar coordinates and sorting by the angle.
We can compute the angle as atan2(y,x).
We may sort the original data using this computed number by applying a Schwartzian transform where the angle is used as the temporary sorting key:
awk -v OFS='\t' '{ print atan2($2,$1), $0 }' Sphere_ISOTEST_data.txt |
LC_ALL=C sort -g | cut -f 2- >sorted.txt
The awk program computes the atan2() value from the values in the file and prepends the original lines with this value for each line, using a tab character as delimiter. The sort utility then sorts the data and cut is used to remove the temporary sorting key.
Note that I'm using sort -g, which is non-standard. The -g option, when implemented, usually enables a "general numeric sort", which we will need to use as some of the atan2() values will be in scientific notation due to being very small. We also need to use the POSIX locale ("C") for sort to read and sort the numbers correctly. We could obviously work around this by modifying the output format of the atan2() values as we print them, but this at least shows the general idea.
The result is written to sorted.txt, which can then be plotted with lines, for example:
GNUTERM=png gnuplot -e 'set size square; pl "sorted.txt" w l' >sorted.png
 |
Here I have some data which I would like to plot with lines using Gnuplot.
Using the code
pl 'Sphere_ISOTEST_data.txt' w p
I get the figure below, but using
pl 'Sphere_ISOTEST_data.txt' w l
I get the following:
Can anyone suggest how to sort the data such that I can plot w l and get only the circumference of the circle?
| Sort the data to plot a circle |
sed -e '/^91[0-9]\{10\}$/s/^91//' < input > output
(or use filename if you prefer)
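Applied to the sample data from the question (read here from a file called file), this gives the requested output:
$ sed '/^91[0-9]\{10\}$/s/^91//' file
9876543210
7894561230
9194561230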
|
sed command to delete leading 91 if number is 12 digit
My file is
919876543210
917894561230
9194561230
Need output
9876543210
7894561230
9194561230 | sed command to delete leading 91 if number is 12 digit |
I suspect you have a locale using a comma as decimal separator. This should fix that issue:
awk -F' ' '{printf "%-12s%-12s\n", $11, $9}' foo.dat | LC_ALL=C sort -g |
Consider the following foo.dat file with 11 columns:
893 1 754 946 193 96 96 293.164 293.164 109.115 70.8852
894 1 755 946 192 95 96 291.892 292.219 108.994 70.821
895 1 755 947 193 95 97 290.947 291.606 109.058 70.5709
896 1 755 947 193 95 97 290.002 290.663 109.122 70.5053
897 1 755 948 194 95 98 289.057 290.057 109.187 70.2532
898 1 754 949 196 96 99 288.444 289.456 109.44 70
899 1 754 950 197 96 100 287.501 288.862 109.506 69.7458
900 1 754 949 196 96 99 286.559 287.578 109.573 69.8637
I'd like to filter the columns 11 and 9 and print only these on a file, but in ascending order on 1st new column, that is, after printing 11 and 9 columns, sort the output by numerical rule.
I tried
awk -F' ' '{printf "%-12s%-12s\n", $11, $9}' foo.dat | sort -g
but the output is strange around 70. It is
70.2532 290.057
70 289.456
70.5053 290.663
Why is 70 not before 70.2532? It looks like the . is being ignored.
| Sorting float number and awk |
Depending on the type of data, sorting may take a long time.
We can get the result without sorting (but using more memory) like this:
awk 'a[$1]<$2{a[$1]=$2}END{for(i in a){print(i,a[i])}}' infile
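The same logic written out with comments, for readability (this is just an expanded form of the one-liner above, not a different approach):
awk '
a[$1] < $2 {     # larger value seen (an unset entry compares as 0 here)
    a[$1] = $2   # remember the largest value for this ID
}
END {
    for (i in a) # print each ID with its largest value
        print i, a[i]
}' infile
Note that the output order of for (i in a) is unspecified; pipe through sort if the IDs need to come out in order.
 |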
I have a tabular file in which the first column has IDs and the second one has numeric values. I need to generate a file that contains only the line with the largest score for each ID.
So, I want to take this:
ES.001 2.33
ES.001 1.39
ES.001 119.55
ES.001 14.55
ES.073 0.35
ES.073 17.95
ES.140 1.14
ES.140 53.88
ES.140 18.28
ES.178 150.27
And generate this:
ES.001 119.55
ES.073 17.95
ES.140 53.88
ES.178 150.27
Is there a way of doing this from a bash command-line?
| Filtering the line with the largest value for a given ID |
I would use awk. Assuming that the data is formatted exactly as per your sample data, the following will produce the desired output:
awk -v MAX=0 '{ if(NR>1 && $3>MAX){WANT1=$1; WANT2=$2; MAX=$3}} END{print WANT1, WANT2}' infile > outfile |
In Unix, I am trying to find a command that would find the maximum value in Column3 and print the corresponding values from Column2 and Column1 (but not from Column3) in a new file.
Column1 Column2 Column3
A 1 25
B 2 6
C 3 2
D 4 16
E 5 10
What should be the Unix command? Should I use grep or awk or datamash?
| Find the maximum value of Column3 and print the values of Column1 and 2 only |
Using awk:
dmh -q 12 | awk 'NR > 1 { sum += $5 } END {print sum}'
This will sum all the values in column 5 and then print the total.
To store this in a variable use command substitution:
var=$(dmh -q 12 | awk 'NR > 1 { sum += $5 } END {print sum}') |
I'm working on a Solaris 10 OS. I have a process that returns the table below when I run the command dmh -q 12:
*PROFILE PRIORITY COMM_TYPE QID # OF MSGS ATTRIBUTES/VALUES*
13 999 DC 24 3 32 1865
13 999 DC 94 1 32 1665
13 999 DC 157 0 32 1961
13 999 DC 188 2 32 1784
13 999 DC 293 0 32 1625
13 999 DC 294 31 32 1950
13 999 DC 713 0 32 1601
13 999 DC 838 0 32 1607
13 999 DC 1458 0 32 1855
Here I'm trying to get the total count of messages and store it in a variable.
I have tried this but it doesn't work with me:
dmh -q 12 | grep -v'# OF MSGS' | wc -l
the expected result should be 37
| get count with grep |
See https://mywiki.wooledge.org/ParsingLs and https://mywiki.wooledge.org/Quotes and then do this instead:
$ find . -mtime -5 -type f -printf "cp -p '%p' '%Ad'\n"
cp -p './bbbb.csv' '09'
cp -p './cccc.csv' '10'
cp -p './out1.txt' '09'
cp -p './out2.txt' '05' |
I have the following command:
find . -mtime -5 -type f -exec ls -ltr {} \; | awk '{print "cp -p "$9" "$7}'
the output is like:
cp -p ./18587_96xxdata.txt 10
cp -p ./16947_96xxdata.txt 8
cp -p ./32721_96xxdata.txt 9
cp -p ./32343_96xxdata.txt 9
cp -p ./32984_96xxdata.txt 10
But I want the last part of the output to be always 2 digits, such as:
cp -p ./18587_96xxdata.txt 10
cp -p ./16947_96xxdata.txt 08
cp -p ./32721_96xxdata.txt 09
cp -p ./32343_96xxdata.txt 09
cp -p ./32984_96xxdata.txt 10
I tried different variations of %02d, but not getting what I want.
Here's one I tried:
find . -mtime -5 -type f -exec ls -ltr {} \; | awk '{print "cp -p " $9 " "("%02d", $7)}'
Should I be using printf, and if so, how exactly?
Thank you!
| How to always print an output with certain number of digits using AWK |
sort -k2,2gr input.txt > output.txt
The g (general numeric) key option, a GNU sort extension, understands scientific notation such as 2.46335345019054e-05, which the plain n option does not.
 | I want to sort my input file on the basis of the 2nd column in descending order. I have used the following command for this:
sort -k2,2nr input.txt > output.txt
However, after running the command I am getting this output:
ENSG00000273451 2.46335345019054e-05
ENSG00000181374 1.05269640687115e-05
ENSG00000182150 1.01285751909085e-05
ENSG00000283697 1
ENSG00000283463 0.932309672567822
ENSG00000157916 0.845034568173369
ENSG00000268983 0.835243646448564
ENSG00000227251 0.834326032498057
ENSG00000140157 0.833074569385573
ENSG00000134882 0.832993129338477
And the expected output should be
ENSG00000283697 1
ENSG00000283463 0.932309673
ENSG00000157916 0.845034568
ENSG00000268983 0.835243646
ENSG00000227251 0.834326032
ENSG00000140157 0.833074569
ENSG00000134882 0.832993129
ENSG00000273451 2.46E-05
ENSG00000181374 1.05E-05
ENSG00000182150 1.01E-05 | How to sort the second column in descending order? [duplicate] |
Given your shell provides "process substitution" (like recent bashes), try
diff <(tr '-' ' ' <file1) <(tr '-' ' '<file2)
1,2c1,2
< 21 0.0081318 0.0000000 0.0000000 0.0000000 0.0138079
< 22 0.0000000 0.0000000 0.0000000 0.1156119 0.0000000
---
> 21 0.0081318 0.0000000 0.0000000 0.0000000 0.0032533
> 22 0.0000000 0.0000000 0.0000000 0.0250637 0.0000000 |
I have two large files consisting mainly of numbers in matrix form, and I'd like to use diff (or similar command) to compare these files and determine which numbers are different.
Unfortunately, quite a lot of these numbers differ only by sign, and I'm not interested in those differences. I only care when two numbers are different in magnitude. (i.e. I want 0.523 vs. 0.623, but NOT 0.523 vs. -0.523)
Is it possible to make diff ignore the sign and only print numbers that are different in magnitude?
EDIT: Some input examples, as requested:
File 1:
21 -0.0081318 0.0000000 0.0000000 0.0000000 -0.0138079
22 0.0000000 0.0000000 0.0000000 0.1156119 0.0000000
23 0.0000000 0.0047536 0.0000000 0.0000000 0.0000000
File 2:
21 -0.0081318 0.0000000 0.0000000 0.0000000 0.0032533
22 0.0000000 0.0000000 0.0000000 -0.0250637 0.0000000
23 0.0000000 -0.0047536 0.0000000 0.0000000 0.0000000
Assuming my files are formatted mostly like this (except much, MUCH longer), I want to print out the differences, but ignore such differences when they're only in sign. For example, I don't care for 0.0047536 vs. -0.0047536, but I do want to print 0.1156119 vs. -0.0250637.
| How to ignore differences between negative signs of numbers in the diff command? |
Pure awk:
$ awk -F'[, ]' 'NR==FNR{n[$2]=$1;next}{m[$3]+=n[$1]}
END{for(i in m){print i " " m[i]}}' \
file1 file2
degree1 2
degree2 5
Or you can put it into a script like this:
#!/usr/bin/awk -f
BEGIN {
FS="[, ]"
}
{
if (NR == FNR) {
n[$2] = $1;
next;
} else {
m[$3] += n[$1];
}
}
END {
for (i in m) {
print i " " m[i];
}
}
First set the field separator to both comma and space (that is the BEGIN block, or the -F command line option).
Then, when parsing the first file (the FNR == NR idiom) put number of connections for a user into array indexed by user name. When parsing the following file(s), add the number of connections for each user into the array indexed by user group.
Finally (the END block) scan the whole array and print the key, value pairs.
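Assuming the script is saved as, say, sumdeg.awk (any name will do), it can be run against the two files from the question with:
awk -f sumdeg.awk file1 file2
which prints 2 for degree1 and 5 for degree2 (the order of the output lines is not guaranteed, since it depends on awk's array traversal order).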
|
I have a file file1 with the amount of times a user shows up in the files, something like this:
4 userC
2 userA
1 userB
and I have another file file2 with users and other info like:
userC, degree2
userA, degree1
userB, degree2
and I want an output where it shows the amount of times user shows up, for each degree:
5 degree2
2 degree1 | Summing by common strings in different files |
There's probably a more elegant way using Perl's pack and unpack, but using a combination of string manipulation and oct:
$ perl -pe 's/\b[01]+\b/oct "0b" . $&/ge' file
ADD $05 $05 $05
SUBI $06 $06 3
MUL $07 $07 $07
JSR 29
See Converting Binary, Octal, and Hexadecimal Numbers
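A quick breakdown of the pieces:
-pe runs the substitution on every input line and prints the result
\b[01]+\b matches a run of 0s and 1s delimited by word boundaries
$& is the text that was just matched
oct "0b" . $& prepends 0b so that oct() interprets the string as binary and returns its decimal value
the e flag evaluates the replacement as Perl code, and g replaces every match on the line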
|
I want to process with a bash script a text file that looks like this:
ADD $05 $05 $05
SUBI $06 $06 00011
MUL $07 $07 $07
JSR 011101
taking the binary numbers (that are always longer than 4 bits) and converting them into their decimal representation.
For the previous example, this is the file I want to end up with:
ADD $05 $05 $05
SUBI $06 $06 3
MUL $07 $07 $07
JSR 29
I have been exploring tr and sed, but I think they don't let me work with the matched pattern (to convert it) before the replacement. What approach can I take?
EDIT:
with the suggestion of @DopeGothi, and given that I have at most one binary number per line, I can create a temporary file with all the decimal versions of the binary numbers. The issue is that now I need to intercalate them:
Every time I find a binary number in the first file, I replace with the corresponding number in the file with decimals.
| Convert binary strings into decimal |
It does give you the sum of every column, but in one column (provided that the data is whitespace-separated):
$ cat data.in
1 2
3 4
5 6
$ awk '{ for (i=1;i<=NF;i++) sum[i]+=$i } END { for (i in sum) print sum[i] }' data.in
12
9
So it's a matter of not outputting a newline between each sum.
$ awk '{ for (i=1;i<=NF;i++) sum[i]+=$i } END { for (i in sum) printf("%d ", sum[i]); printf("\n") }' data.in
12 9
The printf() function takes a format string. The %d is the formatting string for an integer (use %f for floats), and the following space will also be outputted after the integer. We then finish with outputting an explicit newline after the loop.
Another way to solve it, using the ORS ("Output Record Separator") variable:
$ awk 'BEGIN { ORS=" " } { for (i=1;i<=NF;i++) sum[i]+=$i } END { for (i in sum) print sum[i]; printf("\n") }' data.in
12 9
Also see Dave Thompson's insightful warning in comments below about the ordering of keys in Awk's associative arrays (which are not guaranteed to be sorted).
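If the sums should come out in left-to-right column order regardless of how awk walks the array, one can remember the highest field count seen and loop over the indices numerically in the END block (a sketch):
$ awk '{ for (i=1;i<=NF;i++) sum[i]+=$i; if (NF>nf) nf=NF }
       END { for (i=1;i<=nf;i++) printf("%s%s", sum[i], (i<nf) ? " " : "\n") }' data.in
9 12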
|
Hi I need to get the sum of each and every column in a file, needs to be flexible to as many columns as are in any given file
currently I use:
awk '{for (i=1;i<=NF;i++) sum[i]+=$i;}; END{for (i in sum) print sum[i];}'
This however, only gives me the sum of the first column, which I could obviously loop, but I would prefer something simpler.
Any ideas/answers?
| Sum of each column in a file, needs to be flexible to as many columns as are in the file |
Using the LC_NUMERIC='en_US.UTF-8' locale (which uses the comma as the thousands separator when formatting numbers) and the sprintf() format modifier %\047d (%\047d is just another way of writing %'d, using an octal escape sequence for the single quote; you could also write it %'\''d), we force-convert the floating point numbers into integers with a comma as the thousands separator, and then use the gsub() function to convert the commas to single quotes.
$ LC_NUMERIC='en_US.UTF-8' awk 'BEGIN{ FS=OFS=";" }
{ $3=sprintf ("%\047d", $3); gsub(",", "\047", $3) }1' infile
A;B;1'234
C;D;789 |
I have semicolon delimited data, where third column is decimal number:
A;B;1234.56
C;D;789.23
I need to make 2 changes to the format of the number:
remove numbers after the decimal point
add "thousand separator" to the number
so 1234.56 would become 1'234
I was able to do the first part, but don't know how to add the thousand separator:
printf "A;B;1234.56\nC;D;789.23\n" | awk -F';' '{gsub(/\.../,"",$3) ; printf "%s,%s,%s\n", $1, $2, $3 }'
how can I do that ?
| Reformat number in specific column |
Without your big number issue taken into account, I would write the awk program something like this:
BEGIN {
FS = "\\|~\\^"
OFS= "|~^"
}
$1 == "H" {
header = $0
}
$1 == "R" {
name = $3
sub("T.*", "", name)
sum[name] += $4
cnt[name] += 1
if (cnt[name] == 1)
print header >name ".txt"
print >name ".txt"
}
$1 == "T" {
for (name in sum)
print $1, $2, cnt[name], $4, sum[name] >name ".txt"
}
For convenience, I set the output field separator, OFS, to |~^. This allows me to not worry about inserting it between fields that I output. The field separator for input, FS, is set to a regular expression that matches that string.
I then have three main blocks of code:
One for parsing the H line. It is assumed that there only is one of these and that it occurs at the start. This simply stores the header line in the variable header.
One for parsing the R lines. Each record contains the date that should be used as the output file name in the 3rd field. This is parsed out in the same manner as you do it. The sum for that date is accumulated, and a counter is incremented too.
If the counter is one, i.e. if this is the first time we see that particular date, we write the header to the output file associated with that date. Then we write the current record to the file.
The last block parses the T line. It is assumed that there only is one of these and that it occurs at the end. This simply outputs the accumulated sums and counts for each separate date to the file associated with that date, together with some data from the original T line.
To support arbitrary large numbers (you say elsewhere that you have numbers that would require in excess of 100 bits to store, and that would therefore overflow an integer in awk), we employ the arbitrary precision calculator bc as a "coprocess" (a sort of a computational service). The line saying sum[name] += $4 is replaced by
if (sum[name] == "") sum[name] = 0
printf "%s + %s\n", sum[name], $4 |& "bc"
"bc" |& getline sum[name]
This requires GNU awk (available for most Unix systems, in one way or another).
What this does is to first initialize the sum for the current date to zero, if there is no sum for this date yet. We do this because we need to supply a 0 to bc for the initial sum.
We then print the expression that bc should compute using the GNU awk-specific |& pipe to write to a coprocess. The bc utility, which will be started and running in parallel with our awk script, does the computation, and the following getline reads the output from bc from another |& pipe, directly into sum[name].
As far as I understand, GNU awk will not spawn a separate bc process for each summation, but will maintain a single bc process running as a coprocess. This would thus be slower than doing the computation inside awk natively, but will be much faster than spawning a separate bc for each and every summation.
For the given data, the following two files would be created:
$ cat 2019-03-05.txt
H|~^20200425|~^abcd|~^sum
R|~^abc|~^2019-03-05T12:33:52.27|~^105603.042|~^2018-10-23T12:33:52.27|~^aus
R|~^abc|~^2019-03-05T12:33:52.27|~^2054.026|~^2018-10-24T12:33:52.27|~^usa
R|~^abc|~^2019-03-05T12:33:52.27|~^30.00|~^2018-08-05T12:33:52.27|~^ddd
R|~^abc|~^2019-03-05T12:33:52.27|~^20.00|~^2018-07-23T12:33:52.27|~^audg
T|~^20200425|~^4|~^xxx|~^107707.068
$ cat 2019-03-06.txt
H|~^20200425|~^abcd|~^sum
R|~^abc|~^2019-03-06T12:33:52.27|~^123562388.23456|~^2018-04-12T12:33:52.27|~^hhh
R|~^abc|~^2019-03-06T12:33:52.27|~^10.00|~^2018-09-11T12:33:52.27|~^virginia
R|~^abc|~^2019-03-06T12:33:52.27|~^15.03|~^2018-10-23T12:33:52.27|~^jjj
R|~^abc|~^2019-03-06T12:33:52.27|~^10.04|~^2018-04-08T12:33:52.27|~^jj
T|~^20200425|~^4|~^xxx|~^123562423.30456 |
I have a below input file which I need to split into multiple files based on the date in 3rd column. Basically all the same dated transactions should be splitted into particular dated file. Post splitting I need to create a header and Trailer.
The Trailer should contain the count of the records and the sum of the amounts in the 4th column (the sum of the amounts for that date). As I stated above, I have very large numbers in the amount column. How can I integrate bc into the code below?
Input File
H|~^20200425|~^abcd|~^sum
R|~^abc|~^2019-03-06T12:33:52.27|~^123562388.23456|~^2018-04-12T12:33:52.27|~^hhh
R|~^abc|~^2019-03-05T12:33:52.27|~^105603.042|~^2018-10-23T12:33:52.27|~^aus
R|~^abc|~^2019-03-05T12:33:52.27|~^2054.026|~^2018-10-24T12:33:52.27|~^usa
R|~^abc|~^2019-03-06T12:33:52.27|~^10.00|~^2018-09-11T12:33:52.27|~^virginia
R|~^abc|~^2019-03-05T12:33:52.27|~^30.00|~^2018-08-05T12:33:52.27|~^ddd
R|~^abc|~^2019-03-06T12:33:52.27|~^15.03|~^2018-10-23T12:33:52.27|~^jjj
R|~^abc|~^2019-03-06T12:33:52.27|~^10.04|~^2018-04-08T12:33:52.27|~^jj
R|~^abc|~^2019-03-05T12:33:52.27|~^20.00|~^2018-07-23T12:33:52.27|~^audg
T|~^20200425|~^8|~^xxx|~^123670130.37256
Output file
20190305.txt
H|~^20200425|~^abcd|~^sum
R|~^abc|~^2019-03-05T12:33:52.27|~^105603.042|~^2018-10-23T12:33:52.27|~^aus
R|~^abc|~^2019-03-05T12:33:52.27|~^2054.026|~^2018-10-24T12:33:52.27|~^usa
R|~^abc|~^2019-03-05T12:33:52.27|~^30.00|~^2018-08-05T12:33:52.27|~^ddd
R|~^abc|~^2019-03-05T12:33:52.27|~^20.00|~^2018-07-23T12:33:52.27|~^audg
T|~^20200425|~^4|~^xxx|~^107707.068
Output file
20190306.txt
H|~^20200425|~^abcd|~^sum
R|~^abc|~^2019-03-06T12:33:52.27|~^123562388.23456|~^2018-04-12T12:33:52.27|~^hhh
R|~^abc|~^2019-03-06T12:33:52.27|~^10.00|~^2018-09-11T12:33:52.27|~^virginia
R|~^abc|~^2019-03-06T12:33:52.27|~^15.03|~^2018-10-23T12:33:52.27|~^jjj
R|~^abc|~^2019-03-06T12:33:52.27|~^10.04|~^2018-04-08T12:33:52.27|~^jj
T|~^20200425|~^4|~^xxx|~^123562423.30456
Code I'm using (PS: suggested by one of our community members)
Here's an awk solution:
awk -F'\\|~\\^' '{
if($1=="H"){
head=$0
}
else if($1=="T"){
foot=$1"|~^"$2
foot4=$4
}
else{
date=$3;
sub("T.*","", date);
data[date][NR]=$0;
sum[date]+=$4;
num[date]++
}
}
END{
for(date in data){
file=date".txt";
gsub("-","",file);
print head > file;
for(line in data[date]){
print data[date][line] > file
}
printf "%s|~^%s|~^%s|~^%s\n", foot, num[date],
foot4, sum[date] > file
}
}' file
The code is working brilliantly. But in the step
sum[date]+=$4;
it is unable to sum large numbers. Since I'm using %s at the last step, the Trailer sum is getting printed with an exponential value.
printf "%s|~^%s|~^%s|~^%s\n", foot, num[date],
foot4, sum[date] > file
Here, I just wanted to sum the large numbers and print the exact sum. (I tried bc, the bench calculator, here but got stuck, since this sum is based on the array and is also added up per date.)
Please help me with this.
Also, I tried "%.15g" at the trailer step
printf "%s|~^%s|~^%s|~^%.15g\n", foot, num[date],
foot4, sum[date] > file
With this, I'm able to get the exact sum if the result has 15 digits (including the decimal). If the sum result exceeds 15 digits, this isn't working. Kindly help.
| Sum of large numbers and print the result with all decimal points for the stated question when using awk arrays |
If field numbers are constant - as in your question fields 3 and 5 - try
awk '
function CHX(FLD) {n = split ($FLD, T, ".")
sub (T[n] "$", sprintf ("%X", T[n]), $FLD)
}
{CHX(3)
CHX(5)
}
1
' file
07:36:03.848461 IP 172.17.3.41.814D > 172.17.3.43.4400 UDP, length 44
07:36:03.848463 IP 172.17.3.42.814D > 172.17.3.43.4401 UDP, length 44
07:36:03.848467 IP SYSTEM-A.814D > 172.17.3.43.440A UDP, length 45
07:36:03.848467 IP SYSTEM-B.814D > 172.17.3.43.440B UDP, length 45
For e.g. a trailing colon in field 5:
awk '
function CHX(FLD) {n = split ($FLD, T, "[^0-9]")
TRM = ""
if (!T[n]) {n--
TRM = substr ($FLD, length($FLD))
}
sub (T[n] TRM "$", sprintf ("%X%s", T[n], TRM), $FLD)
}
{CHX(3)
CHX(5)
}
1
' file |
I have the following tcpdump stream:
Current:
07:36:03.848461 IP 172.17.3.41.33101 > 172.17.3.43.17408: UDP, length 44
07:36:03.848463 IP 172.17.3.42.33101 > 172.17.3.43.17409: UDP, length 44
07:36:03.848467 IP SYSTEM-A.33101 > 172.17.3.43.17418: UDP, length 45
07:36:03.848467 IP SYSTEM-B.33101 > 172.17.3.43.17419: UDP, length 45
The port numbers are in decimal. How can I pipe it to sed or awk to modify the stream so it's the same stream with the port numbers changed to hexadecimal:
Expected:
07:36:03.848461 IP 172.17.3.41.814d > 172.17.3.43.4400: UDP, length 44
07:36:03.848463 IP 172.17.3.42.814d > 172.17.3.43.4401: UDP, length 44
07:36:03.848467 IP SYSTEM-A.814d > 172.17.3.43.440a: UDP, length 45
07:36:03.848467 IP SYSTEM-B.814d > 172.17.3.43.440b: UDP, length 45
If I have the port number, I use this to convert it into hexadecimal:
echo 33101 | sed -e 's/.*://' | xargs printf "%x\n"
814d
I have been trying to solve this but no luck. How can I replace the port numbers after the last occurrence of '.' in the third and fifth columns of the stream and then change them to hexadecimal on the fly?
| sed/awk: replace numbers in a line after last occurance of '.' |
using Miller (https://github.com/johnkerl/miller) and running
mlr --n2c put 'for (key, value in $*) {
if ((value % 2 ==0) && (NR % 2 ==0)) {
$even +=value;
} elif ((value % 2 !=0) && (NR % 2 !=0)) {
$odd +=value;
}
}
' then unsparsify then stats1 -a sum -f odd,even input.csv
you will have
odd_sum,even_sum
253,82 |
I would like to write a shell script program that gives the sum of even and odd numbers in a given file's odd and even lines.
I would like to use:
sed -n 2~2p
and
sed -n 1~2p
but I am not even sure where and how to start solving it.
Could you please guide me in the right direction?
Input file example:
20 15 14 17
20 50 79 77
55 40 89 77
45 65 87 12
Output example:
Odd summ: 15+17+55+89+77=253(Enough just the end of the summ)
Even summ: 20+50+12=82(Enough just the end of the summ) | Sum of even and odd numbers of odd and even lines |
You haven't defined "sort correctly" anywhere, so I'm going to assume that you want to group by the first column and order by ascending numerical value of the second, with duplicate values removed. This solution isn't what you've actually asked for, but it seems to be what you want.
sort -k1,1 -k2,2n -u datafile
female 4
female 13
male 1
male 9
male 11
male 14
If you really want the second column padded to have two digits you could use this
xargs printf "%s %02d\n" <datafile
male 09
male 11
male 09
male 01
female 04
female 13
male 14 |
I have the following file, beginning with
male 9
male 11
male 9
male 1
female 4
female 13
male 14
If I use
sort -u -k1,1 -k2,2n
this returns
female 13
female 4
male 1
male 11
male 14
male 9
male 9
How can I make the single-digit numbers show as 01, 02, etc. so they will sort correctly?
Update:
The commenter who told me to just move the -u to the back was correct.
sort -k1,1 -k2,2n -u
worked perfectly, thanks!
| Replace single field numbers with double field numbers (1->01) |
The numbers you use seem to be very big for bash. You can try something like:
#!/bin/bash
SIZE=$(redis-cli info | awk -F':' '$1=="used_memory" {print int($2/1000)}')
MAX=19000000
if [ "$SIZE" -gt "$MAX" ]; then
echo 123
fi |
Trying to use this
#!/bin/bash
SIZE=$(redis-cli info | grep used_memory: | awk -F':' '{print $2}')
MAX=19000000000
if [ "$SIZE" -gt "$MAX" ]; then
echo 123
fi
But I always get: "Ganzzahliger Ausdruck erwartet" (integer expression expected)
When I echo SIZE I get a value like 2384934 - I dont have to / can convert a value or can / do I?
OUTPUT of redis-cli info:
# Memory
used_memory:812136
used_memory_human:793.10K
used_memory_rss:6893568
used_memory_rss_human:6.57M
used_memory_peak:329911472
used_memory_peak_human:314.63M
total_system_memory:16760336384
total_system_memory_human:15.61G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:8.49
mem_allocator:jemalloc-3.6.0
EDIT: I found the mistake - I used print in the awk command - without it it works.
SIZE=$(redis-cli info | grep used_memory: | awk -F':' '{$2}') | Error when compare big numbers |
You can use awk for this:
$ awk 'length() == 11 { $0 = "0" $0 } 1' < input
082544990078
082544990757
899188001738
9337402002723
9337402002686
9337402002747
812153010733
852271005003
089000118359 |
I extracting a column from a file with different values, some of them are 11 character to 13, but whenever the value is 11 I need to add a 0 in front.
awk -F, '{print $1 }' $FILE | \
awk '{printf("%04d%s\n", NR, $0)}' | \
awk '{printf("%-12s\n", $0) }'
82544990078
82544990757
899188001738
9337402002723
9337402002686
9337402002747
812153010733
852271005003
89000118359
It should look like this:
082544990078
082544990757
899188001738
9337402002723
9337402002686
9337402002747
812153010733
852271005003
089000118359 | Add 0 when ever the value is 12 character |
One awk invocation is enough for all of your processing.
%s basically converts your number to a string; use another format conversion like %f for a float in this case:
awk -F ',' '{printf("%3.2f\n", $6)}' ${FILE} > ${TEMP}/unit_price |
Trying to re-format a column: I need to add decimals to the price and it needs to be aligned to the right. It also needs leading white space in order to line up with the other columns.
awk -F, '{print $6}' $FILE | awk '{printf("%-7s\n", $0) }' > $TEMP/unit_price
current output:
99
121.5
108
67.5
This is how I need it to look, aligned to the right:
99.00
63.00
121.50
108.00
108.00
67.50
67.50 | Add the decimals and align to the right |
Try this using modern bash (don't use backticks or expr) :
if ((var1 > 800)); then
overtime=$((var1 - 800)) # variable assignment with the arithmetic result
echo "Overtime: $overtime"
fi
Or simply:
if ((var1 > 800)); then
overtime="Overtime: $((var1 - 800))" # concatenation of string + arithmetic
fi
Check bash arithmetic
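To append the result to the text file from the question (assuming $path holds its name, as in the asker's snippet):
if ((var1 > 800)); then
    echo "Overtime: $((var1 - 800))" >> "$path"
fi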
| I am working on a project to calculate my overtime at work with a shell script. I have two inputs and want to find if my number is over 800 = 8 hours;
if it is bigger, then it has to print out the result to my text file.
It has to print out my difference.
if [ $var1 -gt 800 ]; then
`expr $var1-800`
echo Overtime: "" >> $path
and then I'm lost because I don't know how to print out the result of my calculation.
| Echo calculation to text file [duplicate] |
The trick is that, on input, awk does not automatically interpret hex numbers. You have to ask it to do that explicitly using the strtonum function. Thus, when you need the number in your code, replace $3 with strtonum($3).
Example
Let's take this as the test file:
$ cat file
0x7f7488c4e6d7: R 0x7f7488b169ce
0x7f7488c4e6e9: R 0x7f7488b169cc
0x7f7488c4e6f8: R 0x7f7488b169d0Let's use this as the script:
$ cat a.awk
#!/usr/bin/awk -f
NR==1 {
prev=strtonum($3)
next
}
{
dif=prev - strtonum($3)
printf "%x\n",dif
print $3, dif > "diff"
}The screen output looks like:
$ ./a.awk file
2
fffffffffffffffeThe output file is:
$ cat diff
0x7f7488b169cc 2
0x7f7488b169d0 -2 | i want to read the values in the third column and find the difference from the other value in the same column.
i tried this
#!/usr/bin/awk -f
NR==1 {prev=$3;next; }
dif=prev - $3;
{printf "%x",dif}
{print $3, dif > "diff"}But since the values are hexadecimal im getting a zero as the difference.
| read file containing Hex values and process it |
Is there a specific reason for which you used this particular algorithm?
I'd rather construct the binary in shell variables than in a file. In such case you can strip the leading zeroes by adding a zero to a number, such as
expr 00001111 + 0
1111
Also, if you have to use a file, I would suggest using /tmp instead of ~/Documents to hold temporary files. And finally, if I were you, I would prefer to construct the binary using a divide method, which naturally ends when the conversion is completed, thus avoiding the problem of leading zeroes rather than solving it.
|
So I made a decimal to binary converter but currently it doesn't chop off the zeros in the beginning. If I entered 64 for $1 it would start with 13 zeros which is rather unsightly but I don't know how to chop them off. Any help?
#!/bin/bash
cat /dev/null > ~/Documents/.tobinary
touch ~/Documents/.tobinary
toBin=$1
counter=0
numZeros=0
first1=0
kill=0
echo $toBin
for v in {19..0}
do
let temp=2**$v
let test=$toBin-$temp
if [ $test -ge 0 ]
then
if [ $first1 -eq 0 ]
then
kill=$numZeros
let first1++
fi
if [ $test -gt 0 ]
then
echo -n 1 >> ~/Documents/.tobinary
toBin=$test
elif [ $test -eq 0 ]
then
echo -n 1 >> ~/Documents/.tobinary
while [ $counter -lt $v ]
do
echo -n 0 >> ~/Documents/.tobinary
let counter++
done
break
fi
elif [ $test -lt 0 ]
then
echo -n 0 >> ~/Documents/.tobinary
let numZeros++
fi
done
cat ~/Documents/.tobinary | How can I remove x number of zeros from the beginning of a file?
factor {2..1000} | awk 'NF==2{print $2}' | How can I make a POSIX script that will print all the prime numbers from 1 to 1000?
| Simple bash script to print prime numbers from 1 to 1000 [closed] |
Assuming the data is in the file called file and that it is sorted on the first column, the GNU datamash utility could do this in one go on the data file alone:
datamash -H -W -g 1 max 2-7 <file
This instructs the utility to use whitespace-separated columns (-W; remove this if your columns are truly tab-delimited), that the first line of the data contains headers (-H), to group by the first column (-g 1), and to calculate the maximum values for the 2nd through to the 7th columns.
The result, given the data in the question:
GroupBy(Info_region) max(Lig_score) max(Lig_prevista) max(Lig_prevista_+1) max(Int_score) max(Expo_score) max(Protac_score)
BARD1_region_005 0 3 3 0 1 1
BARD1_region_006 0 1 1 0 1 1
BIRC2_region_001 1 12 12 0 1 2
BIRC2_region_002 1 0 0 0 1 1
BIRC2_region_003 0 0 0 0 1 1
BIRC2_region_004 0 1 1 0 1 1
UHRF1_region_004 1 0 1 1 1 2
UHRF1_region_005 1 3 3 1 1 2You could also use --header-in in place of -H to get header-less output, and then take the header from the original data file:
{ head -n 1 file; datamash --header-in -W -g 1 max 2-7 <file; } >output
Here, I'm also writing the result to some new output file called output.
Using awk and assuming tab-delimited fields:
awk -F '\t' '
BEGIN { OFS = FS }
NR == 1 { print; next }
{
n[$1] = 1
for (i = 2; i <= NF; ++i)
a[$1,i] = (a[$1,i] == "" || $i > a[$1,i] ? $i : a[$1,i])
}
END {
nf = NF
for (j in n) {
$0 = j
for (i = 2; i <= nf; ++i)
$i = a[$1,i]
print
}
}' file
This calculates the maximum value in each column for each group. These numbers are stored in the a array while the n array just holds the group names as keys.
| I need to extract some values from a file in bash, on my CentOS system. In myfile.txt I have a list of objects called Info_region in which each object is identified with a code (eg. BARD1_region_005 or BIRC2_region_002 etc.) Moreover there are some others columns in which are reported some numerical variable. The same object (same code name) can be repeated several times in my file.
I also have a file that contains a completed list with all object codes without duplicates.
I would like to obtain an output.txt file in which each object (code name) is reported only once as in my list-file.txt and I would like to associate to this the maximum possible values associated with that code name in myfile.txt.
myfile.txt: (columns are separated by tab)
Info_region Lig_score Lig_prevista Lig_prevista_+1 Int_score Expo_score Protac_score
BARD1_region_005 0 3 3 0 1 1
BARD1_region_006 0 1 1 0 1 1
BIRC2_region_001 1 6 7 0 1 2
BIRC2_region_001 1 7 8 0 1 2
BIRC2_region_001 0 2 2 0 0 0
BIRC2_region_001 0 12 12 0 1 1
BIRC2_region_001 1 10 11 -1 1 1
BIRC2_region_001 1 2 3 0 1 2
BIRC2_region_001 1 0 1 0 1 2
BIRC2_region_001 1 6 7 0 1 2
BIRC2_region_002 0 0 0 0 1 1
BIRC2_region_002 1 0 0 -1 0.5 0.5
BIRC2_region_003 0 0 0 0 1 1
BIRC2_region_004 0 1 1 0 1 1
UHRF1_region_004 0 0 0 1 1 2
UHRF1_region_004 0 0 0 1 1 2
UHRF1_region_004 1 0 1 0 0.5 1.5
UHRF1_region_004 0 0 0 1 1 2
UHRF1_region_005 0 3 3 1 1 2
UHRF1_region_005 1 0 0 -1 1 1file-list.txt:
Info_region
BARD1_region_005
BARD1_region_006
BIRC2_region_001
BIRC2_region_002
BIRC2_region_003
BIRC2_region_004
UHRF1_region_004
UHRF1_region_005output.txt:
Info_region Lig_score Lig_prevista Lig_prevista_+1 Int_score Expo_score Protac_score
BARD1_region_005 0 3 3 0 1 1
BARD1_region_006 0 1 1 0 1 1
BIRC2_region_001 1 12 12 0 1 2
BIRC2_region_002 1 0 0 0 1 1
BIRC2_region_003 0 0 0 0 1 1
BIRC2_region_004 0 1 1 0 1 1
UHRF1_region_004 1 0 1 1 1 2
UHRF1_region_005 1 3 3 1 1 2Could someone help me please? Thank you!
| Extract maximum values for each objects from a file [duplicate] |
set -- $(cat /proc/loadavg)
load=${2/./}
if [ "$load" -ge 500 ]; then
echo alert
fi
Get the load average from /proc, and set the positional parameters based on those fields. Grab the 2nd field and strip out the period. If that (now numeric) value is greater than or equal to 500, alert. This assumes (the current) behavior where the load averages are presented to two decimal points. Thanks to Arrow for pointing out a better way.
|
I'm trying to build a script to trigger an action/alert for a linux appliance when load average reaches a specific threshold.
Script looks like this:
#!/bin/bash
load=`echo $(cat /proc/loadavg | awk '{print $2}')`
if [ "$load" -gt 5 ]; then
echo "foo alert!"
fi
echo "System Load $(cat /proc/loadavg)"
Credit to helloacm.com for getting me started here.
When I run it, I get an error:
./foocheck.sh: line 4: [: 0.03: integer expression expectedWhich makes sense -- it's seeing the period/decimal and thinking that I'm comparing a string to an integer.
Most solutions I've found for this involve bc -l which isn't available on this appliance. I need to find a way to compare these values without using bc. Any ideas?
| Error comparing decimal point integers in bash script |
I think this is a possibility, simplifying:
for i in {1..11}; do
n_month=$(($i + 1))
[[ $n_month =~ ^[0-9]$ ]] && n_month="0$n_month"
echo "$n_month"
done
Output
02
03
04
05
06
07
08
09
10
11
12 |
In a shell script, I'm processing, some addition process will print an output. If it is a single-digit one, then it has to add zero as a prefix.
Here is my current script:
c_year=2020
for i in {01..11}
do
n_year=2020
echo 'Next Year:'$n_year
if [[ $i == 11 ]]
then n_month=12
echo 'Next Month:'$n_month
else
n_month=$(($i + 1))
echo 'Next Month:'$n_month
fi
echo $date" : Processing data '$c_year-$i-01 00:00:00' and '$n_year-$n_month-01 00:00:00'"
done
The i value is a double digit, but n_month is still printed as a single digit. How do I make the shell output return double digits by default?
Or any alternate way to solve this?
| shell script set the default numbers as double digit zero as prefix |
Using awk:
awk -F[,.] '{print $1","$2","substr($4,1,3)","substr($6,1,3)}' file
where -F is used to set the field separator to both comma , and dot .
substr will print the 3 digits required after the dot.
|
I have a file whose contents look like this.
2,0,-1.8433679676403103,0.001474487996447893
3,1,0.873903837905657,0.6927701848899038
1,1,-1.700947426133768,1.5546514434152598CSV with four columns whose third and last columns are floats.
I want to get rid of the whole part of the numbers (including the sign) and keep only the three first digits of the decimal part so that the above sample would become
2,0,843,001
3,1,873,692
1,1,700,554How can I do this?
| Keep only a few digits of decimal part |
A bash-style solution using tr, sort and head
echo $(< file) | tr ' ' '\n' | sort -rn | head -1
echo $(< file) reads the file and echoes the result. But as we don't have quotes around it (echo "$(< file)" does the same as cat file) the content is one long string with newlines removed and whitespace squeezed
| tr ' ' '\n' pipes the result through tr replacing each space character with a newline
| sort -rn pipes the result to sort, sorting all lines in reversed (-r) numeric (-n) order (largest number first)
| head -1 pipes the result to head to output the first line |
Can someone show me a way to find the largest positive number using awk,sed or some sort of unix shell program? I'm using Linux RHEL with KSH. The input may have 8 columns worth of data with N/A as one of the possible values.
SAMPLE INPUT1252.2 1251.8 N/A N/A
-31.9 -33.2 N/A N/A
-1172.4 -1174.4 N/A N/A
-6.5 -6.4 N/A N/A
-.3 -.3 N/A N/A
1351.8 1351.8 N/A N/A
38.3 38.0 N/A N/A
-21.6 -21.9 N/A N/A
-4.7 -4.5 N/A N/A
-5.0 -2.9 N/A N/A
3.1 3.3 N/A N/A
-20.1 -20.3 N/A N/A
-199.1 -199.3 N/A N/A
346.5 346.7 N/A N/A
-.8 -.4 N/A N/A
14.8 14.7 N/A N/A
8.4 8.4 N/A N/A
-18.2 -18.2 N/A N/A
-43.7 -43.6 N/A N/A Desired output
Largest number is 1351.8 | Find the largest positive number in my given data? |
Exclude the first row:
awk -F',' 'NR > 1 {sum+=$3} END {print sum / (NR - 1)}' cpu.csv |
I have a csv file with many columns. I have computed average of a column using the command
awk -F',' '{sum+=$3} END {print sum/NR}' cpu.csv
But it does include the first row, which has text fields like Serial number, Name, value etc. I want to exclude this first row when computing the average.
Any ideas on how to achieve it ?
| Finding average excluding the first row |