It's a little hard to see what you are trying to do, but are you by any chance looking for the $spoolfile variable/configuration setting in a global context?
I'm not sure how it interacts with Mutt's IMAP support, but it allows you to set the folder which will be opened by default when Mutt is started.
It looks like you set it in the account folder-hooks, but you'd need to set it outside of those in order for it to apply before the folder-hook folder is entered.
Try adding the following to the end of your ~/.muttrc, and see if it helps:
set spoolfile="imaps://[emailprotected]@imap.gmail.com:993/INBOX" |
This is my first question on this site, so I'll get right to it.
I'm a fan of command-line tools and text-based applications. I use tmux with a minimalist tiling WM (qtile), and I don't change this environment. I'm a developer; I mainly use Python and Perl.
My first question is about mutt, a great client. I use the sidebar to display my mailboxes, and I use IMAP with Google accounts. Here is my configuration:
account-hook . 'unset preconnect imap_user imap_authenticators'

# First account
account-hook 'imaps://[emailprotected]@imap.gmail.com:993/' \
' set imap_user = "[emailprotected]" \
imap_pass = "password" 'folder-hook 'imaps://[emailprotected]@imap.gmail.com:993/INBOX' \
' set imap_user = "[emailprotected]" \
imap_pass = "1password" \
smtp_url = "smtp://[emailprotected]@smtp.gmail.com:587/" \
smtp_pass = "password" \
from = "[emailprotected]" \
realname = "Natal Ngétal" \
folder = "imaps://[emailprotected]@imap.gmail.com:993" \
spoolfile = "+INBOX" \
postponed="+[Gmail]/Drafts" \
mail_check=60 \
imap_keepalive=300 '

# Second account
account-hook 'imaps://[emailprotected]@imap.gmail.com:993/' \
' set imap_user = "[emailprotected]" \
imap_pass = "password" 'folder-hook 'imaps://[emailprotected]@imap.gmail.com:993/INBOX' \
' set imap_user = "[emailprotected]" \
imap_pass = "password" \
smtp_url = "smtp://[emailprotected]@smtp.gmail.com:587/" \
smtp_pass = "password" \
from = "[emailprotected]" \
realname = "Natal Ngétal" \
folder = "imaps://[emailprotected]@imap.gmail.com:993" \
spoolfile = "+INBOX" \
postponed="+[Gmail]/Drafts" \
mail_check=60 \
imap_keepalive=300 '

mailboxes + 'imaps://[emailprotected]@imap.gmail.com:993/INBOX' \
+ 'imaps://[emailprotected]@imap.gmail.com:993/INBOX' \
+ 'imaps://[emailprotected]@imap.gmail.com:993/[Gmail]/Messages envoyés' \
+ 'imaps://[emailprotected]@imap.gmail.com:993/[Gmail]/Messages envoyés' \
+ 'imaps://[emailprotected]@imap.gmail.com:993/[Gmail]/Spam' \
+ 'imaps://[emailprotected]@imap.gmail.com:993/[Gmail]/Spam' \
+ 'imaps://[emailprotected]@imap.gmail.com:993/Divers' \
+ 'imaps://[emailprotected]@imap.gmail.com:993/Divers' \
+ 'imaps://[emailprotected]@imap.gmail.com:993/[Gmail]/Tous les messages' \
+ 'imaps://[emailprotected]@imap.gmail.com:993/[Gmail]/Tous les messages'

# Where to put the stuff
set header_cache = "~/.mutt/cache/headers"
set message_cachedir = "~/.mutt/cache/bodies"
set certificate_file = "~/.mutt/certificates"

set mail_check = 30
set move = no
set imap_keepalive = 900
set editor = "vim"set date_format = "%D %R"
set index_format = "[%Z] %D %-20.20F %s"
set sort = threads # like gmail
set sort_aux = reverse-last-date-received # like gmail
set uncollapse_jump # don't collapse on an unread message
set sort_re # thread based on regex
set reply_regexp = "^(([Rr][Ee]?(\[[0-9]+\])?: *)?(\[[^]]+\] *)?)*"

bind index gg first-entry
bind index G last-entry
bind index R group-reply
bind index <tab> sync-mailbox
bind index <space> collapse-thread

# Ctrl-R to mark all as read
macro index \Cr "T~U<enter><tag-prefix><clear-flag>N<untag-pattern>.<enter>" "mark all as read"

# Saner copy/move dialogs
macro index C "<copy-message>?<toggle-mailboxes>" "copy a message to a mailbox"
macro index M "<save-message>?<toggle-mailboxes>" "move a message to a mailbox"

bind index \CP sidebar-prev
bind index \CN sidebar-next
bind index \CO sidebar-open
bind pager \CP sidebar-prev
bind pager \CN sidebar-next
bind pager \CO sidebar-open

set pager_index_lines = 10 # number of index lines to show
set pager_context = 3 # number of context lines to show
set pager_stop # don't go to next message automatically
set menu_scroll # scroll in menus
set tilde # show tildes like in vim
unset markers # no ugly plus signs

bind pager k previous-line
bind pager j next-line
bind pager gg top
bind pager G bottom
bind pager R group-reply

set quote_regexp = "^( {0,4}[>|:#%]| {0,4}[a-z0-9]+[>|]+)+"
auto_view text/html # view html automatically
alternative_order text/plain text/enriched text/html

set sidebar_delim = '│'
set sidebar_visible = yes
set sidebar_width = 24

set status_chars = " *%A"
set status_format = "───[ Folder: %f ]───[%r%m messages%?n? (%n new)?%?d? (%d to delete)?%?t? (%t tagged)? ]───%>─%?p?( %p postponed )?"

set beep_new # bell on new mails
unset mark_old # read/new is good enough for me

color normal white black
color attachment brightyellow black
color hdrdefault cyan black
color indicator black cyan
color markers brightred black
color quoted green black
color signature cyan black
color status brightgreen blue
color tilde blue black
color tree red black
color index red black ~D
color index magenta black ~T

set signature="~/.signature"

So this works well: I can see both of my inboxes and whether there are new messages in them. But when I open mutt it first opens a local mailbox, and I don't understand why; to see the new messages in the other inboxes I first have to move into each of them. Maybe that is normal, but how can I ask mutt to open the domain.com inbox first, for example, instead of a local mailbox that does not exist?
| Mutt imap multiple account |
This seems to be resolved in OfflineIMAP 6.6.1.
|
I have offlineimap running on a cron job with */10 * * * * offlineimap -q -u quiet. Every once in a while it seems to get interrupted, and when that happens I can't restart it. If I try to run it from the terminal I get an error indicating that it is locked:
OfflineIMAP 6.5.5
Licensed under the GNU GPL v2+ (v2 or any later version)
Account sync Example:
*** Processing account Example
ERROR: Could not lock account Example. Is another instance using this account?
*** Finished account 'Example' in 0:00
ERROR: Exceptions occurred during the run!
ERROR: Could not lock account Example. Is another instance using this account?

Traceback:
File "/usr/lib/python2.7/dist-packages/offlineimap/accounts.py", line 240, in syncrunner
self.lock()
File "/usr/lib/python2.7/dist-packages/offlineimap/accounts.py", line 207, in lock
OfflineImapError.ERROR.REPO)

Is there any way to break the lock or force offlineimap to quit all the way?
| How do I restart offlineimap? |
I figured this out. I had a change in my oauth2.py that printed the json response instead of just the access_token. mutt was passing the base64-encoded json as the bearer token which is not correct. Thanks to @jakub-jindra for pointing me toward the --quiet option.
I figured this out by running mutt -d 5 which shows the base64-encoded payload that it passes to GMail:
[2020-08-28 10:00:54] Authenticating (OAUTHBEARER)...
[2020-08-28 10:00:54] 7> a0001 AUTHENTICATE OAUTHBEARER XXXXXXXXXX$XXXXInfQEB
[2020-08-28 10:00:55] 7< + XXXXXXXXXX
[2020-08-28 10:00:55] 7> [2020-08-28 10:00:55] OAUTHBEARER authentication failed.

I base-64 decoded that and got:
n,[emailprotected],host=imap.gmail.comport=993auth=Bearer {u'access_token': u'ya29.a0XXXXXX', u'scope': u'https://mail.google.com/', u'expires_in': 3599, u'token_type': u'Bearer'}

Hope this helps someone!
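For anyone hitting the same thing, here is a minimal, hypothetical sketch (not the stock oauth2.py) of what the refresh helper should ultimately do: parse the JSON returned by Google's token endpoint and print only the access_token field, since that bare string is what mutt's imap_oauth_refresh_command expects. The endpoint URL and credential values below are placeholders.

import json
import urllib.parse
import urllib.request

# Placeholder credentials -- substitute your own client_id/client_secret/refresh_token.
params = urllib.parse.urlencode({
    "client_id": "CLIENT_ID.apps.googleusercontent.com",
    "client_secret": "CLIENT_SECRET",
    "refresh_token": "REFRESH_TOKEN",
    "grant_type": "refresh_token",
}).encode()

# Ask Google's OAuth2 token endpoint for a fresh access token.
with urllib.request.urlopen("https://accounts.google.com/o/oauth2/token", params) as resp:
    token = json.load(resp)

# Print ONLY the token string; printing the whole JSON dict is exactly
# what caused mutt to send a garbage OAUTHBEARER payload.
print(token["access_token"])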
|
I've used OAUTHBEARER authentication to use mutt with GMail for a while, but I've run into an issue that I can't figure out. When I launch mutt I get OAUTHBEARER authentication failed. Here is .muttdebug0:
[2020-08-27 10:38:59] TLSv1.3 connection using TLSv1.3 (TLS_AES_256_GCM_SHA384)
[2020-08-27 10:39:00] Connected to imap.gmail.com:993 on fd=7
[2020-08-27 10:39:00] imap_cmd_step: grew buffer to 512 bytes
[2020-08-27 10:39:00] 7< * OK Gimap ready for requests from XX.XX.XX.XX r65mb112981108pjg
[2020-08-27 10:39:00] IMAP queue drained
[2020-08-27 10:39:00] 7> a0000 CAPABILITY
[2020-08-27 10:39:00] 7< * CAPABILITY IMAP4rev1 UNSELECT IDLE NAMESPACE QUOTA ID XLIST CHILDREN X-GM-EXT-1 XYZZY SASL-IR AUTH=XOAUTH2 AUTH=PLAIN AUTH=PLAIN-CLIENTTOKEN AUTH=OAUTHBEARER AUTH=XOAUTH
[2020-08-27 10:39:00] Handling CAPABILITY
[2020-08-27 10:39:00] 7< a0000 OK Thats all she wrote! r65mb112981108pjg
[2020-08-27 10:39:00] IMAP queue drained
[2020-08-27 10:39:00] imap_authenticate: Trying method oauthbearer
[2020-08-27 10:39:00] Authenticating (OAUTHBEARER)...
[2020-08-27 10:39:00] 7> a0001 AUTHENTICATE OAUTHBEARER bixhPWN3YWxrYXRyb25AZ21haWwuXXXXX=
[2020-08-27 10:39:01] 7< + eyJXXXXXXX29nbGUuY29tLyJ9
[2020-08-27 10:39:01] 7> [2020-08-27 10:39:01] OAUTHBEARER authentication failed.

I have a project set up in the Google Dev console and a client_id and client_secret. I get a warning about my Google Developer app being unverified that I have to explicitly allow. I don't remember this being an issue in the past. I can successfully log in using:
oauth2.py [emailprotected] --client_id=56843257498 --client_secret=fjdksla --generate_oauth2_token

This is in my .muttrc:
set imap_oauth_refresh_command="~me/bin/oauth2.py \
--user [emailprotected] \
--client_id=60080XXX.apps.googleusercontent.com \
--client_secret=AZXXXX \
--refresh_token=1//XXXXAYSNwF"
set smtp_oauth_refresh_command="~me/bin/oauth2.py \
[emailprotected] \
--client_id=60080XXX.apps.googleusercontent.com \
--client_secret=AZXXXX \
--refresh_token=1//XXXXAYSNwF"The above commands run fine in a shell. Also, a test with oauth2.py succeeds as well (though this tests XOAUTH2 and not OAUTHBEARER).
10:30.51 > HNNF1 AUTHENTICATE XOAUTH2
10:30.54 < +
10:30.54 write literal size 280
10:31.24 < * CAPABILITY IMAP4rev1 UNSELECT IDLE NAMESPACE QUOTA ID XLIST CHILDREN X-GM-EXT-1 UIDPLUS COMPRESS=DEFLATE ENABLE MOVE CONDSTORE ESEARCH UTF8=ACCEPT LIST-EXTENDED LIST-STATUS LITERAL- SPECIAL-USE APPENDLIMIT=35651584
10:31.24 < HNNF1 OK [emailprotected] authenticated (Success)
10:31.24 > HNNF2 SELECT INBOX
10:31.50 < * FLAGS (\Answered \Flagged \Draft \Deleted \Seen $Forwarded $Junk $NotJunk $NotPhishing $Phishing)
10:31.50 < * OK [PERMANENTFLAGS (\Answered \Flagged \Draft \Deleted \Seen $Forwarded $Junk $NotJunk $NotPhishing $Phishing \*)] Flags permitted.
10:31.50 < * OK [UIDVALIDITY 2] UIDs valid.
10:31.50 < * 1060 EXISTS
10:31.50 < * 0 RECENT
10:31.50 < * OK [UIDNEXT 51915] Predicted next UID.
10:31.50 < * OK [HIGHESTMODSEQ 5748372]
10:31.50 < HNNF2 OK [READ-WRITE] INBOX selected. (Success)I have this all set up and working for my work email, but not for my personal. I feel like I'm just missing something simple. Can anyone help me see my mistake?
| OAUTHBEARER authentication fails in mutt |
Using formail (shipped with procmail):
find ~/users/example -type f -exec sh -c '
for email do
formail -x to -x cc < "$email" |
grep -qF [emailprotected] &&
formail -cx subject < "$email"
done' sh {} +

That is, for each email file:
- extract the To and Cc headers
- search for [emailprotected] in there
- if found, extract the Subject header and print it on one line (with -c). |
I want to find all emails in my IMAP folders that contain a certain text (namely: that are sent to a certain email address). I already found out I can do so by using grep like so:
grep -rnw '~/users/example' -e "[emailprotected]"

This returns either the matched text or the name of the file (with -l). But what I really need for my task is to know the "Title" of the email file that was found.
/home/example/users/example/.Archives.2013/cur/1364614080.4080.example.com:2,Sa
/home/example/users/example/.Archives.2013/cur/1385591317.91317.example.com:2,RSa
/home/example/users/example/.Archives.2013/cur/1358235054.35054.example.com:2,S
/home/example/users/example/.Archives.2013/cur/1358445545.45545.example.com:2,S
/home/example/users/example/.Archives.2013/cur/1453119248.M330746P8611.example.com,S=6761,W=6915:2,S

So, I somehow need to find the files based on the grep above, but the result that is listed should be a different part of that same file (maybe with regex?).
How can I go about this?
| Find files containing text, but report different part of it as result |
It's all there in the account configuration. You can even decide to synchronize just the recent messages (you specify what "recent" is).
The relevant screen is shown at https://support.mozilla.org/en-US/kb/imap-synchronization (screenshot omitted here), where you will find the rest of the options and even a list of synchronization benefits and other info.
|
After reinstalling the email client Thunderbird and configuring my email account using IMAP, Thunderbird starts to download all old messages. Judging from the time it takes it seems like entire messages are downloaded instead of only the message headers. Right now the status bar says:
[emailprotected]: Downloading message 7426 of 11927 in All mail...
How can I prevent this? I only want the email headers to be downloaded.
| Prevent Thunderbird from downloading old messages |
I have used multiple domain certificates, but only in cases where the names are variant domains of the same organization. Ownership of the names traces back to the same corporation. If I were a certificate authority, I would not want to provide the same service for unrelated organizations.
For SMTP, it is not unusual for the domain of the mail server to be different from the domain which is originating mail. Dovecot supports domain logins, so that you can have users log in with ids like [emailprotected] while using a single domain for the Dovecot server. You would still just need a certificate signed by a recognized certificate authority.
You could also do as I do, and publish the public key of your signing certificate. Your clients could then import the key, and the certificate will pass if the domain matches. I haven't tried adding alternate names to a self-signed certificate, but it appears that openssl will generate such certificates fairly easily.
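If you want to experiment with that last idea, here is a rough sketch (not something I run in production) using the third-party Python cryptography package to produce a self-signed certificate carrying several subjectAltName entries; the domain names and file names are placeholders. The same result can be had from plain openssl, this is just one concrete way to do it.

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Placeholder domains: one certificate, several DNS names.
names = ["mail.maindomain.com", "mail.domain1.com", "mail.domain2.com"]

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, names[0])])

cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(subject)            # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .add_extension(x509.SubjectAlternativeName([x509.DNSName(n) for n in names]),
                   critical=False)
    .sign(key, hashes.SHA256())
)

# Write out the key and certificate; point Dovecot's ssl_cert/ssl_key at these files.
with open("mail.key", "wb") as f:
    f.write(key.private_bytes(serialization.Encoding.PEM,
                              serialization.PrivateFormat.TraditionalOpenSSL,
                              serialization.NoEncryption()))
with open("mail.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))

Clients that import and trust this certificate should then accept whichever of the listed names they connect to.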
|
I have a mail server I'm trying to setup to service 6 domains. I will have multiple mail users accessing the server via POP3S or IMAPS.
Currently in MS Outlook, they are receiving an unverified SSL cert warning because I'm using a self-signed SSL certificate. I would like to purchase a SSL cert which would stop this warning from appearing.
I want each domain group to use something like mail.theirdomain.com as the incoming mail server name. Is there a way to do this without having to purchase a cert for each of the 6 domains? In other words, can I buy one for the mail.maindomain.com and have it authenticate the other DNS names?
Currently in DNS, the mail.theirdomain.com has a CNAME setup to mail.maindomain.com. I'm running Dovecot for incoming and Sendmail for outgoing. Latest version on Debian 6.
| Dovecot POP3S & IMAPS SSL Certificate that works for all user domains |
Unfortunately at present there's no strict standard on IMAP folder/flag usage, and no easy way to unify behaviors of all IMAP clients.
Most clients, including Outlook and many mobile device apps, are incapable of changing this behavior away from their default settings. The best bet might be to configure the most flexible one (Thunderbird in many cases) to conform to the others.
Related pages I found by random search:
http://kb.mozillazine.org/IMAP_Trash_folder
https://serverfault.com/questions/464384/imap-standard-folder-names-junk-or-spam
https://superuser.com/questions/203605/outlook-and-imap-outlook-doesnt-allow-the-drafts-and-trash-folders-to-sync-wi |
I've succesfully set up Postfix and Dovecot at my home server to use Maildir as the mailbox storage format. However after I tried out a couple of different mail clients like Thunderbird and Outlook and connected through Dovecot's IMAP server, I found out that they're all a bit inconsistent in the way they make use of the mailbox.
Two examples:
- To indicate forwarded messages, Thunderbird uses a Maildir label called "Forwarded", while Outlook just sets the R flag (which subsequently causes other clients to interpret that message as "Replied").
- Each client uses its own "special" folders which can't be deleted. What's called "Trash" in one program might be called "Deleted Items" in another.
Is there a logical consistent way to access my e-mails through multiple different client programs without them confusing each other (and me)?
Thank you!
| Mail clients' inconsistent use of Maildir through IMAP |
In the ? view, you can cursor to the folder you want to pick and press Space.
If you're in a place where you can just type the name (e.g. in the plain s view), you can just type the name of the folder preceded by =.
|
I have IMAP folders like this:
A
A/A
A/B
B
B/A
C

When I save (s in mutt) a mail and use ?, I'm presented with a view like this:
A +
B +
C

It works perfectly until I want to move to A or B. If I click on A, it automatically expands A and presents me with its children (A and B).
Is there a way to do that? Or is there a way to write it without using the ? view?
| How to select an expandable folder when saving? |
I found the answer, so I thought I should post it here too (more precisely, @behrad-eslamifar did :).
You should append the CRL to the CA certificate given to dovecot EVEN IF YOU HAVE SET ssl_require_crl = no,
like this:
openssl ca -gencrl -out crlfile
cat crlfile >> cacert.pem
service dovecot restart

Thanks everyone for their suggestions. <3
|
I have configured dovecot to use Client certificate authentication. I have used CA.pl (openssl wrapper) to create CA cert and sign client and server certs with that.(no certificate chain, CA cert is trusted in client) I have set the certificate as CA certificate in dovecot for client auth. Dovecot correctly asks IceDove for certificates but after clicking OK, It fails with error
"Client didn't present valid SSL certificate."
Using openssl to manually test IMAP connection also results in this error.
dovecot config: https://gist.github.com/Xcess/71f7eeeda0a270b252f1de5d7308c0e2
I have tried certificate with CN=user1 and [emailprotected]. both failed. Also set common-name to be username in dovecot conf...no difference.
I don't know what to do as this is all stated in manuals and seems pretty simple and straightforward. But it fails.
thanks

Update 1:
Output of command openssl x509 -in certificate.crt -text -noout:
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
8e:3d:9b:7c:13:35:88:b7
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=IR, ST=Isf, O=Apps4you, CN=lnxsrv2
Validity
Not Before: Mar 1 10:45:32 2017 GMT
Not After : Mar 1 10:45:32 2018 GMT
Subject: C=AU, ST=Some-State, O=Internet Widgits Pty Ltd, CN=user1/[emailprotected]
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
---
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Basic Constraints:
CA:FALSE
Netscape Comment:
OpenSSL Generated Certificate
X509v3 Subject Key Identifier: ---
-----Output omitted-----

Update 2:
this slightly modified config file also doesn't work:
https://gist.github.com/Xcess/599beaec17a4a524a2acbde1b7f5c70f

Update 3:
Verbose SSL Log file :
https://gist.github.com/Xcess/f54850ecdaa6bcd044a77d133cb9b9c2
| Dovecot rejecting client certificate |
The problem started when I installed Homebrew's version of python rather than the Apple version. The error was resolved by running
brew uninstall python

I discovered this was the solution by reading about a similar error produced by another Python program on OS X.
|
I am using offlineimap to fetch mail from several IMAP servers. This used to work but today offlineimap has been unable to fetch mail, producing the following errors:
*** Processing account example
Establishing connection to imap.gmail.com:993
ERROR: Unknown SSL protocol connecting to host 'imap.gmail.com' for
repository '<redacted>'. OpenSSL responded:
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
*** Finished account 'example' in 0:00

Relevant parts of my configuration are:
[Account example]
localrepository = local-example
remoterepository = remote-example

[Repository local-example]
type = Maildir
localfolders = ~/mail/example

[Repository remote-example]
maxconnections = 1
type = Gmail
remotehost = imap.gmail.com
remoteuser = [emailprotected]
remotepasseval = get_keychain_pass(account="[emailprotected]",
server="imap.gmail.com")
ssl = yes
sslcacertfile = /usr/local/etc/openssl/certs/dummycert.pem

The sslcacertfile configuration was created in response to this SO answer. The get_keychain_pass function is from this offlineimap configuration.
I am using offlineimap 6.5.7 built with Homebrew on OS X 10.10.4.
| Offlineimap unknown SSL protocol error |
In general you keep checking for e-mail unless, as mentioned by @JoelDavis, the server can be extended with some push command.
Further, if your e-mail server supports it, you can make use of the IDLE extension for IMAP4:
https://www.rfc-editor.org/rfc/rfc2177
http://en.wikipedia.org/wiki/IMAP_IDLE
The Perl Mail::IMAPClient module has native support for IDLE:
http://search.cpan.org/~djkernen/Mail-IMAPClient-2.2.9/IMAPClient.pod#idle
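If you would rather see the mechanism without the module, here is a rough, untested sketch of raw IDLE using Python's standard imaplib (a different language and approach from the Perl module above, shown only to illustrate the protocol; host and credentials are placeholders):

import imaplib

# Placeholder account details.
HOST, USER, PASSWORD = "imap.example.com", "user@example.com", "secret"

imap = imaplib.IMAP4_SSL(HOST, 993)
imap.login(USER, PASSWORD)
imap.select("INBOX")

# imaplib has no high-level IDLE support, so speak the protocol directly.
imap.send(b"a001 IDLE\r\n")
print(imap.readline())            # expect b'+ idling'

# Block here until the server pushes an untagged response such as '* 42 EXISTS',
# which means new mail arrived and the parsing script can be kicked off.
while True:
    line = imap.readline()
    print(line)
    if b"EXISTS" in line:
        break

imap.send(b"DONE\r\n")
print(imap.readline())            # expect b'a001 OK IDLE terminated' (wording varies)
imap.logout()
|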
I have a Perl script that uses NET::IMAPClient and MIME::Parser which simply reads new emails from an IMAP server and saves any attachments to disk.
My question is: what is the best way to run this script as soon as there's new email? I can use cron to periodically run the script, I could check every few seconds even, but I suspect there is a better way.
Ideally I would act upon the receiving of an email immediately, like a system event. What are common software and techniques to achieve this? I'm using a Debian system.
| Receiving emails over IMAP and parsing with a script with minimal delay |
It seems to be some kind of bug. Which version of cyrus imap are you using?
As a quick fix, I think following should work.
Using cyradm delete those rogue mailboxes.
You can find how to use cyradm here.
localhost> sam user.foo.INBOX.* cyrus d
localhost> dm user.foo.INBOX.* |
I am using cyrus imapd on Fedora
The mailbox of a user is presenting a deep tree of nested INBOXes:
user.foo
user.foo.Apple Mail To Do
user.foo.Archives
user.foo.Archives.2011
user.foo.Deleted Messages
user.foo.Drafts
user.foo.INBOX.Deleted Messages
user.foo.INBOX.INBOX.Deleted Messages
user.foo.INBOX.INBOX.INBOX.Deleted Messages
user.foo.INBOX.INBOX.INBOX.INBOX.Deleted Messages
user.foo.INBOX.INBOX.INBOX.INBOX.INBOX.Deleted Messages
user.foo.INBOX.INBOX.INBOX.INBOX.INBOX.INBOX.Deleted Messages
user.foo.INBOX.INBOX.INBOX.INBOX.INBOX.INBOX.INBOX.Deleted Messages
user.foo.INBOX.INBOX.INBOX.INBOX.INBOX.INBOX.INBOX.INBOX.Deleted Messages
user.foo.INBOX.INBOX.INBOX.INBOX.INBOX.INBOX.INBOX.INBOX.INBOX.Deleted Messages
[...]

If I delete the folder and reconstruct the mailbox, the folders are re-created:
/usr/lib/cyrus-imapd/reconstruct -r -f user.foo
All other users are OK.
Any hint on why this structure is created?
| cyrus impad and recursive 'INBOX' directories |
It can all be done with only curl.
Send email
MSG="From: [emailprotected]
To: Receiver
Subject: Test"echo "$MSG" | curl --url 'smtps://smtp.gmail.com:465' --ssl-reqd \
--mail-from "[emailprotected]" --mail-rcpt "[emailprotected]" \
--upload-file - --user "[emailprotected]:password" --insecureTo read mail, first you have to know how many emails are in the INBOX.
EXISTS="$(curl --insecure \
--user "[emailprotected]:password" \
--url 'imaps://imap.gmail.com:993/' \
--request "EXAMINE INBOX" | grep "EXISTS" | grep -oP '\d*' | head -n1 )"echo "$EXISTS"The newest email has the highest number. Print the subject of the last 5 emails, newest first.
for ((i=$EXISTS;i!=$EXISTS-5;i--)); do
SUBJECT="$(curl --insecure -u "[emailprotected]:password" \
--url "imaps://imap.gmail.com:993/INBOX;UID=$i" | \
grep "Subject: " | head -n1 )" echo "$SUBJECT"
doneNotice: To enable smtp/imap access like this, you must go into gmail settings disable secure settings.
|
Trying to set up a simple automation system has proved exceedingly complicated for me. Using scripts, I want to use email to perform remote tasks. The script outline:
1. Send an empty-body email with the header "Pattern1 (number)".
2. Read the last 50 email headers.
3. Find the latest header matching "Pattern2 (number)".
4. Save (echo) the number from that header into a file.
5. Wait 5 minutes.
6. Loop to 1.
I'm trying to use as few programs as possible. Currently I have Mutt for email sending and retrieval, and grepmail (with the grepm script) for searching emails.
Am I way off on using Mutt and grepmail? I'm confused as a non-sysadmin. Mutt isn't great for scripting. What's the simplest way to set this up?
| Setup simple Automation system using Email |
I have solved the issue, after following this guide http://xmodulo.com/enable-user-authentication-postfix-smtp-server-sasl.html
It still didn't work at first, but it turns out that was because my password had a '£' in it, which was causing the issue.
|
I have an email server with public dns entry that sends and receives fine.
I am trying to add it to outlook as imap account but it keeps failing. Server error shows
Oct 30 15:29:04 mail.example.local dovecot[17250]: imap-login: Disconnected (auth failed, 1 attempts in 4 secs): user=<[emailprotected]>, method=PLAIN, rip=12.123.456.789, lip=123.4.56.7, session=<yjdfyjfkugih>
Oct 30 15:29:08 mail.example.local auth[17491]: pam_unix(dovecot:auth): check pass; user unknown
cat /etc/pam.d/dovecot
#%PAM-1.0
auth required pam_nologin.so
auth include password-auth
account include password-auth
session include password-auth
auth required pam_unix.so nullok
account required pam_unix.so

cat auth-system.conf.ext
PAM authentication. Preferred nowadays by most systems.
PAM is typically used with either userdb passwd or userdb static.
REMEMBER: You'll need /etc/pam.d/dovecot file created for PAM
authentication to actually work. <doc/wiki/PasswordDatabase.PAM.txt>
passdb {
driver = pam
# [session=yes] [setcred=yes] [failure_show_msg=yes] [max_requests=<n>]
# [cache_key=<key>] [<service name>]
#args = dovecot
}
| imap-login: Disconnected |
Replace --tls1 --tls2
by --ssl1 --ssl2
|
I'm trying to sync my emails from an old server "server2" with a new one "server1"
imapsync \
--host1 imap.server1.com --user1 [emailprotected] --password1 fdsfdsfsfd \
--host2 imap.server2.com --user2 [emailprotected] --password2 fdsfdsfds \
--debugimap1 --debugimap2 \
--tls1 --tls2 --debugssl 4

Nothing happens at all. It gets frozen. In Thunderbird I use SSL/TLS, port 993 and Normal Password. However, there are no such options in imapsync.
When I remove "--tls1 --tls2" it produces no output at all and finishes in a couple of seconds.
Where is the issue?
| Unable to sync email using imapsync |
You can upload with python as well:
import imaplib

# an alternative for IMAP4_SSL is IMAP4 if you're doing this locally
imap = imaplib.IMAP4_SSL(your_2nd_server, its_imap_portnumber)
imap.login(user_name, password)

and then for each message you downloaded:
imap.append(mailbox, [], delivery_time, message)

You have to get the delivery time out of the message header for this.
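To make that concrete, here is a rough end-to-end sketch under assumed placeholder server names and credentials: fetch one raw message from the first account, recover its delivery time from the Date header, and append it to the second account. The \Seen flag and mailbox names are just examples.

import email
import imaplib
from email.utils import parsedate_to_datetime

# Source account (placeholders).
src = imaplib.IMAP4_SSL("imap.first-server.example")
src.login("user", "password")
src.select("INBOX", readonly=True)
typ, data = src.fetch("1", "(RFC822)")      # raw bytes of message number 1
raw_msg = data[0][1]

# Destination account (placeholders).
dest = imaplib.IMAP4_SSL("imap.second-server.example")
dest.login("user", "password")

# Turn the Date header into IMAP's INTERNALDATE format
# (assumes the header carries a timezone, as RFC-compliant mail does).
date_hdr = email.message_from_bytes(raw_msg)["Date"]
internal_date = imaplib.Time2Internaldate(parsedate_to_datetime(date_hdr))

dest.append("INBOX", r"(\Seen)", internal_date, raw_msg)
dest.logout()
src.logout()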
|
I want to download my emails via IMAP from one email account and upload them to another. I want to do that manually in Python. I know how to retrieve my emails via IMAP, but how can I actually "upload" them to my other email account? Is there a standard way or does that depend on my 2nd email server?
| How to upload my emails to another email server I've downloaded via IMAP from my 1st one? |
You presumably have a typo, postifx should be postfix. Search the dovecot config files for postifx and fix those.
|
I'm running Ubuntu 14.10 installed postfix dovecot vimbadmin and roundcube.
Everything works except dovecot.
When I restart dovecot I get this:
stop: Unknown instance: dovecot start/running, process 6580

In my /var/log/dovecot.log file I get this:
Fatal: service(auth) Group doesn't exist: postifx (See service auth { unix_listener /var/run/dovecot/auth-userdb { group } } setting)

So I probably have some problems with the settings. I will post those two files if necessary, but I'm hoping maybe someone already had this problem and it's a quick fix.
| Dovecot (imap) not starting |
I found a shorter and simpler way to implement with Python.
Sample code below.
#!/usr/bin/python

import mailbox
import email.utils
import os

mbox = mailbox.Maildir(os.environ['HOME'] + "/Maildir")
mbox.lock()
spam = mbox.get_folder('INBOX.junk')

print "INBOX:"
for message in mbox:
    print "- [%s] %s: \"%s\"" % (message['date'], message['from'], message['subject'])
print

print "SPAM messages:"
for message in spam:
    print "- [%s] %s: \"%s\"" % (message['date'], message['from'], message['subject'])

mbox.close()
I know this is theoretically possible, but I'm trying to avoid reinventing the wheel.
I'm using Ubuntu Linux, with Maildir mailbox format. I want to put something in my .login that will parse the ~/Maildir contents and display a summary of my unread email messages.
I'm running Ubuntu 13.04 (Raring Ringtail), and I use mutt for my email client, and I'm using Postfix and dovecot for SMTP and MDA/IMAP.
Is there something already written that will do this? Or will I need to write something myself using Perl and Mail::Box::Maildir? It seems like this would be a common thing, but for the life of me I can't find a package or utility that will do what I'm looking for.
| How can I print a summary of Maildir contents when logging into a shell? |
It appears that Courier IMAP's architecture does not support Maildirs outside of $HOME.
|
I'm running into a weird issue setting up Postfix and Courier IMAP on a clean Ubuntu 13.04 install. I'm using this tutorial, and am currently Testing Courier IMAP.
When I try to login with root, everything's fine (this is part of a netcat connection):
a login root my-pass
a OK LOGIN Ok.
a logout
* BYE Courier-IMAP server shutting down
a OK LOGOUT completed

However, when I try to login with my own account, I get an error:
a login camilstaps my-other-pass
* BYE [ALERT] Fatal error: No such file or directory: No such file or directory

The mail.log:
Jun 2 13:47:37 cs imapd: Connection, ip=[::ffff:127.0.0.1] # this is the root login
Jun 2 13:47:51 cs imapd: LOGIN, user=root, ip=[::ffff:127.0.0.1], port=[54630], protocol=IMAP
Jun 2 13:48:11 cs imapd: LOGOUT, user=root, ip=[::ffff:127.0.0.1], headers=0, body=0, rcvd=9, sent=80, time=20
Jun 2 13:50:59 cs imapd: Connection, ip=[::ffff:127.0.0.1] # this is the other login
Jun 2 13:51:07 cs imapd: chdir Maildir: No such file or directory
Jun 2 13:51:07 cs imapd: camilstaps: No such file or directory

And the mail.err, not really adding anything:
Jun 2 13:51:07 cs imapd: camilstaps: No such file or directory

I configured Postfix to use the Maildir format using /var/mail/%u where %u is the username. At first, I thought the camilstaps user missed his mail folder. However, it does have one similar to root's one:
root@cs:/# tree -CdA /var/mail
/var/mail
├── camilstaps
│ └── Maildir
│ ├── cur
│ ├── new
│ └── tmp
└── root
└── Maildir
├── cur
├── new
└── tmp

Then I thought the camilstaps user had a different maildir in the MAIL constant, however...
root@cs:/# echo $MAIL
/var/mail/root
camilstaps@cs:/$ echo $MAIL
/var/mail/camilstapsWhat's going on here? How can I fix this?
For what it's worth, I'm on Ubuntu Server 13.04.With help from the comments, I found out something interesting / possibly useful:There was an old Maildir in /root. When I remove that one, I get the same error when logging in as root to the IMAP server.
When I add a Maildir directory to the homedir of the camilstaps user, I don't get the error anymore.For some reason, the IMAP server doesn't look in /var/mail/%u (%u = username) but in %h/Maildir (%h = homedir). The $MAIL variable has been set correctly, so what could be the problem here?
| Courier IMAP cannot find my Maildir but can find root's Maildir |
From what I can tell, there is no configuration file for uw-imapd. It is known for needing very little configuration.
But according to this link, you should be able to change some settings by modifying xinetd.d configs.
|
Where can I find configuration file for uw-imapd on debian? Is there even one?
| Where can I find configuration file for uw-imapd on debian? |
Make sure your dovecot configuration is done right.
(see the how-to) Depending on your version, check that you have the imap protocol enabled in:
/etc/dovecot.conf
/etc/dovecot/dovecot.conf
OR
/etc/dovecot/conf.d/10-master.conf
Then check that dovecot is listening on 143.
Then check from outside that the port is really open.
Set up the client (see the how-to).

EDIT: The issue was solved; the problem was on the client's side (The Bat email client).
|
I have installed mail server with postfix and dovecot.
My clients use The Bat email client software.
While sending mails, they use tcp port 587 with STARTTLS.
And I disabled tcp port 25 for sending mails in firewall (in server).
Now, they receive mails via tcp port 110 POP3 protocol.
I want to use IMAP or IMAPS for receiving mails for my clients.
And I totally want to disable POP3 protocol.
1) I tried to change port number to 143 for receiving mails in "The Bat", but it cannot receive mails. How can I do it right?
2) Should I reconfigure dovecot for using IMAP/S only?
3) Is my idea right?
PS: I opened port 143 on my server in firewall.
| How to configure and force users to use IMAP? |
After further research, I've come to the conclusion that this isn't possible; it's a form of file giveaway (because the postfix user would need to create files owned by backup). Looks like the backup program will simply have to run with group access to the mailstore, and thus be depended on to not change anything.
|
Picture a mail store for virtual users. Arriving mail is managed by Postfix (creating new files); access to mail is provided by Courier IMAP (moving files around, deleting files, creating sent mail); and periodically, off-site backups are taken (reading files without changing anything). Obviously this will all work if every process runs as root; equally obviously, they shouldn't all run as root.
The current setup has all the files owned by the postfix user and group. The Postfix and Courier processes all run as that uid/gid. But I would like to have the backup process run as a dedicated read-only user.
Is there a way to tell Postfix and Courier to create their files as user backup:postfix with permissions 0464/0575? I'm on Debian GNU/Linux if that makes any difference.
| Postfix, Courier, and backups - appropriate file permissions? |
You probably have the default local_transport = local as the log says delivered to mailbox.
You need to tell postfix to use dovecot as the transport for local deliveries. you can do this as follows:
Edit your main.cf file and modify it to have the following lines:
local_transport = virtual
virtual_transport = dovecotDon't forget to reload postfix to activate the new config.
|
I'm currently trying to set up Postfix with Dovecot, but something does not seem to work the way it should be.
For some reason, mails I send to my mail account appear in the logs, but never end up in the respective mailbox. The logs are not displaying any error, so I am pretty much left without a clue where the problem might be. Additionally, I did not set up any of these servers myself, so I can't really tell what I might be missing here.
Hope you guys can give me a hint.
Edit: I'm also using a frontend webinterface which lets me log in, but no mails can be displayed in its interface either.
tree -aps
root /var/customers/mail/webmail/mail.domain.net/server/Maildir # tree -aps
.
|-- [drwx------ 4096] .Drafts
| |-- [drwx------ 4096] cur
| |-- [-rw------- 51] dovecot-uidlist
| |-- [-rw------- 248] dovecot.index.log
| |-- [-rw------- 0] maildirfolder
| |-- [drwx------ 4096] new
| `-- [drwx------ 4096] tmp
|-- [drwx------ 4096] .Sent
| |-- [drwx------ 4096] cur
| |-- [-rw------- 51] dovecot-uidlist
| |-- [-rw------- 248] dovecot.index.log
| |-- [-rw------- 0] maildirfolder
| |-- [drwx------ 4096] new
| `-- [drwx------ 4096] tmp
|-- [drwx------ 4096] .Spam
| |-- [drwx------ 4096] cur
| |-- [-rw------- 51] dovecot-uidlist
| |-- [-rw------- 248] dovecot.index.log
| |-- [-rw------- 0] maildirfolder
| |-- [drwx------ 4096] new
| `-- [drwx------ 4096] tmp
|-- [drwx------ 4096] .Trash
| |-- [drwx------ 4096] cur
| |-- [-rw------- 51] dovecot-uidlist
| |-- [-rw------- 156] dovecot.index.log
| |-- [-rw------- 0] maildirfolder
| |-- [drwx------ 4096] new
| `-- [drwx------ 4096] tmp
|-- [drwx------ 4096] cur
|-- [-rw------- 51] dovecot-uidlist
|-- [-rw------- 8] dovecot-uidvalidity
|-- [-r--r--r-- 0] dovecot-uidvalidity.55020f8f
|-- [-rw------- 432] dovecot.index.log
|-- [-rw------- 96] dovecot.mailbox.log
|-- [drwx------ 4096] new
|-- [-rw------- 23] subscriptions
`-- [drwx------ 4096] tmp

19 directories, 18 files

mail.log
Mar 13 00:53:41 v220110897556081 postfix/pickup[12736]: 06AE0736F6C5: uid=0 from=<root>
Mar 13 00:53:41 v220110897556081 postfix/cleanup[15499]: 06AE0736F6C5: message-id=<[emailprotected]>
Mar 13 00:53:41 v220110897556081 postfix/qmgr[12737]: 06AE0736F6C5: from=<[emailprotected]>, size=443, nrcpt=1 (queue active)
Mar 13 00:53:41 v220110897556081 postfix/trivial-rewrite[15500]: warning: do not list domain mail.domain.net in BOTH mydestination and virtual_mailbox_domains
Mar 13 00:53:41 v220110897556081 postfix/local[15503]: warning: database /etc/aliases.db is older than source file /etc/aliases
Mar 13 00:53:41 v220110897556081 postfix/local[15503]: 06AE0736F6C5: to=<[emailprotected]>, relay=local, delay=0.03, delays=0.02/0.01/0/0, dsn=2.0.0, status=sent (delivered to mailbox)
Mar 13 00:53:41 v220110897556081 postfix/qmgr[12737]: 06AE0736F6C5: removed

postconf -Mf
smtp inet n - - - - smtpd
pickup fifo n - - 60 1 pickup
cleanup unix n - - - 0 cleanup
qmgr fifo n - n 300 1 qmgr
tlsmgr unix - - - 1000? 1 tlsmgr
rewrite unix - - - - - trivial-rewrite
bounce unix - - - - 0 bounce
defer unix - - - - 0 bounce
trace unix - - - - 0 bounce
verify unix - - - - 1 verify
flush unix n - - 1000? 0 flush
proxymap unix - - n - - proxymap
proxywrite unix - - n - 1 proxymap
smtp unix - - - - - smtp
relay unix - - - - - smtp
showq unix n - - - - showq
error unix - - - - - error
retry unix - - - - - error
discard unix - - - - - discard
local unix - n n - - local
virtual unix - n n - - virtual
lmtp unix - - - - - lmtp
anvil unix - - - - 1 anvil
scache unix - - - - 1 scache
maildrop unix - n n - - pipe
flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
uucp unix - n n - - pipe
flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail
($recipient)
ifmail unix - n n - - pipe
flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
bsmtp unix - n n - - pipe
flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender
$recipient
scalemail-backend unix - n n - 2 pipe
flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store
${nexthop} ${user} ${extension}
mailman unix - n n - - pipe
flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py
${nexthop} ${user}
dovecot unix - n n - - pipe
flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient} | Postfix / Dovecot Mail appears in logs, but not in its respective folder |
iconv (or at least the 2.36 version available on Debian 12) knows about the encoding you need:
$ iconv -l | grep -i imap
UTF-7-IMAP//So:
$ printf '%s\n' 'Santé' | iconv -f UTF-8 -t UTF-7-IMAP; echo ''
Sant&AOkACg-
$ printf '%s' 'Santé' | iconv -f UTF-8 -t UTF-7-IMAP; echo ''
Sant&AOk-Note that both outputs lack a line feed (so an echo add it at the end of the result for readability), but the line feed character was encoded, thus two different results with or without LF added at the end of the string.
|
To automate the command line creation of hundreds of directories in IMAP maildirs, I would need to be able to convert UTF-8 strings to UTF-7-IMAP on the fly.
In php, I found a way to do it with a string passed as an argument, but it's not very practical, and it requires installing php.
<?php
echo mb_convert_encoding($argv[1], "UTF7-IMAP", "UTF8");
?>

Iconv doesn't seem to know about UTF-7-IMAP.
I found a syntax that allows you to do the opposite:
echo "Sant&AOk-" | tr "&" "+" | iconv -f UTF-7 -t UTF-8
Santé
But it is not reversible quite that simply (the final '-' is missing):
echo "Santé" | iconv -f UTF-8 -t UTF-7 | tr '+' '&'
Sant&AOk
I can find very little about this on the internet.
Edit:
I found this working fine:
perl -CSA -MEncode::IMAPUTF7 -le 'print Encode::IMAPUTF7::encode("IMAP-UTF-7", shift)' "Santé"

but how to create a pipe?
echo "Santé" | perl ... | I need to create a pipe to convert string from UTF-8 to UTF-7-IMAP |
dovecot should work more or less out of the box. You may need to enable plaintext auth on insecure connections. Uncomment the disable_plaintext_auth line and change yes to no
disable_plaintext_auth = no

Install the package with the command sudo apt-get install dovecot-imapd.
|
I'm developing an email client and need a IMAP server without ssl to test with.
I have a bare Ubuntu 16.04 install on DigitalOcean that I would like to set up as a simple IMAP server. It doesn't need to be able to actually send emails; I just need to be able to connect to it using the IMAP protocol.
What's the bare minimum configuration required?
| Simplest IMAP configuration without SSL |
I don't think that thunderbird displays it incorrectly.

"Gmail uses a special implementation of IMAP. In this implementation, Gmail labels become Thunderbird folders. When you apply a label to a message in Gmail, Thunderbird creates a folder with the same name as the label and stores the message in that folder. Similarly, if you move a message to a folder in Thunderbird, Gmail will create a label with the folder name and assign it to the message."

You can read more here.
To achieve the order you want, there is a useful extension, Flat Folder Tree.
|
I have set up Icedove (Thunderbird) to use my gmail account with IMAP. I would like to have my IMAP folder structure
gmail.com
- Inbox
- Drafts
- Sent
- Trash

However, the actual folder structure looks like this (screenshot omitted): Inbox is displayed correctly, but the rest of the folders are nested inside a "gmail" subfolder.
Is it possible to fix this?
| Thunderbird displays IMAP folders incorrectly (gmail account) |
The error "Servname not supported for ai_socktype, disabling lmtp" means that no entry for lmtp could be found in /etc/services.
Either add a line like:
lmtp 2003/tcp # Lightweight Mail Transport Protocol service

to /etc/services (and make sure the file is world-readable/mode 644)
or change the cyrus config file so that the port is given in the listen part instead of lmtp:
SERVICES {
...
lmtp cmd="lmtpd -a" listen="[192.168.50.100]:2003" prefork=1 proto=tcp4
}

Reference: cyrus service port numbers
|
Problem:
When starting cyrus imap with the following line in /etc/cyrus.conf:
SERVICES {
...
lmtp cmd="lmtpd -a" listen="[192.168.50.100]:lmtp" prefork=1 proto=tcp4
}

to enable lmtp via a tcp socket, the socket is not opened.
In the logfile the following message appears:
Servname not supported for ai_socktype, disabling lmtp | cyrus imap does not start lmtp tcp socket, error message: Servname not supported for ai_socktype, disabling lmtp
I assume everything is captured by the 'proxy-rely-server' (as you named it) serving all clients (the easy way) and it's the common client.
You probably need a couple of MRA and MDA installed on your 'proxy-rely-server'.
The MDA will act as POP/IMAP server (for local use + other clients).
The MRA (some kind of special MUA) will fetch emails from the 'real-mail-server' and store them on the MDA.
You have the choice between a lot of systems to achieve this. I just can suggest that, for now, I'm using Dovecot as MDA and Getmail as MRA.
The drawback is that all emails should be erased on the 'real-mail-server' as no syncing is -easily- possible. That means you'll have to take care of backups...
[edit following your comment]
As the 'proxy-rely-server' is NOT the common client, but some kind of server box (I guess), and as I'm a lazy guy, I can suggest having a look at ISPConfig, which will do everything you need and a lot more. It's stable (based on Debian or many other distros), very well documented, easy to install, easy to use, and very easy to maintain.
|
Classic email setup:
[REAL-MAIL-SERVER]
[IMAP/POP3 >] ---------> [EMAIL-CLIENT]
[Storage]

Is there any server application (open source) that would store emails and act like a rely/proxy between the client and the mail server?
[REAL-MAIL-SERVER] [PROXY-RELY-SERVER]
[IMAP/POP3 >] ---------> [< IMAP/POP3 >] ---------> [EMAIL-CLIENT]
[Storage] [Storage]

The purpose here is to keep email data in a private location outside the main server and access it with mobile/desktop clients (the email data would be kept at the "PROXY-RELY-SERVER" location only, clients would connect to it, an SMTP function is not needed). I found piler but I don't know yet if it could work like that.
Otherwise, is there any email client that can act like an IMAP server for other clients to fetch emails from? Or a simple IMAP server that can fetch another mailbox?
| Is there any IMAP/POP3 rely and archiving server application? |
I figured out how to get this working with the ProtonMail account. It turns out it is possible to save-message directly to an IMAP directory. However, while mbsync was interfacing with the ProtonMail Bridge's IMAP just fine, NeoMutt would get stuck on "Logging in..."
While trying to debug the overall issue, I had a look at mbsync's log. It didn't help me much to figure out why messages were getting duplicated, but I did notice it was using the LOGIN IMAP authentication method. So I added this line to my NeoMutt config:
set imap_authenticators = "login"

That, along with the following macro, allows me to move the current message or tagged messages directly to the IMAP Archive mailbox, and I no longer get duplicates:
macro index,pager A ":set confirmappend=no\n<tag-prefix><save-message>imap://127.0.0.1:1143/Archive\n:set confirmappend=yes\n"There is still a small issue in that if the message is both marked as read and moved to Archive in the same mbsync run, the message will still appear as unread. I'm sure there must be some mbsync configuration I'm missing to solve this, but for now I will probably just change my macro to do something like this:Sync NeoMutt ($ by default), then run mbsync, ensuring the un/read states of all messages have been synced with IMAP.
2. Then actually run save-message.
3. Repeat step 1.
This will be a bit slow, but if I'm tagging a bunch of messages first then hopefully it won't be too bad. Good Enough For Now™.
Regarding Gmail, I've decided to just forward all my Gmail that hasn't yet been moved to ProtonMail and let the account die. I still have a Gmail work account, but it doesn't get nearly as much use. A similar approach may well work there, and if I get annoyed enough maybe I'll give it a shot and update this answer with whether it worked.
|
I've currently got Neo/Mutt configured alongside iSync for a few different accounts. Everything syncs up and I've got the Mutt client configured roughly how I want it. However, I run into issues when using <save-message> to move a message to an Archive folder. It's different depending on the type of account:
- In my two Gmail accounts, if I read a message and then <save-message> to my local "All Mail" folder, the message is moved as expected. Then, when I run mbsync, my All Mail folder in Mutt shows two copies of the same message, with one marked for deletion. They are also both marked as unread, even though I had read the message before moving it. My workaround has been to just delete messages from my inbox. On the following sync, the deleted messages appear in All Mail without duplicates (but still annoyingly marked as unread).
- In my ProtonMail account, I can read and then save a message to my Archive folder. On the next sync, I have a duplicate message in the Archive folder, one marked as unread and the other as read, and neither is marked for deletion. Unlike in the Gmail accounts, deleting a message from my inbox does not result in the message showing up in my Archive, so that half-measure doesn't work here.
So maybe it's two separate issues but they certainly seem related. I've read multiple blog posts and scoured many dotfiles. I've seen "solutions" to the duplicate message problem such as folder hooks which delete duplicates when you enter the folder. These are not real solutions, IMO.
So I'm wondering if it's possible to tell Mutt to save a message to a remote folder, and if this would give better results. At the same time, I haven't configured Mutt for IMAP and would prefer that Mutt does no IMAP syncing, leaving that job to mbsync. I still want to use Mutt mainly to read mail that is stored locally, but I also want to teach it to move messages to remote IMAP folders.
Is this possible? Or is there a more obvious approach that I'm overlooking? In the meantime, I'm just manually marking archived messages as read, and deleting duplicates. If I could solve this problem, Mutt will be my favorite email reader by far.
| Mixing local and remote IMAP folders in Neo/Mutt and iSync? |
Dovecot is an IMAP and POP3 server. If you don't need to serve out email received locally then you don't need Dovecot.
|
I noticed a recurring error in the mail.log as well as when I run service dovecot status on a server I 'inherited' for my job (used for our production website). We have our MX records pointed to/email hosted by Gmail.
imap-login: Fatal: Couldn't parse private ssl_key: error:0906D06C:PEM routines:PEM_read_bio:no start line: Expecting PRIVATE KEYWhen I checked the config file for dovecot, it looks like it was never configured. Before I begin addressing the issue here with the SSL and dovecot, I wonder if I should just disable it.
On a new test server, which I set up rather than inherited, I installed postfix for use with our Drupal-based website and a local CRM and all mail functions seem to be working just fine there without installing dovecot and the only difference in functionality between the test and prod servers is email, but I think not even that because mx records to gmail, so do I need to run dovecot on this server?
| Do I need Dovecot on server if Gmail is hosting our email? |
Restricting this to the commonly used Linux servers:Courier
Cyrus
DovecotThey all support IMAP4, since the IMAP4rev1 RFC was defined more than 10 years ago, I don't think you'll find the older version still being used.
|
I can't find any information about the status of IMAP version 4 and how widely it's used nowadays and how widely other versions, 1, 2 and 3 are used.
So is IMAP version 4 common these days? What about the other versions, 1, 2 and 3?
| IMAP -- what are the most popular versions used these days? |
SASL is just a set of authentication mechanisms, which are common to many protocols. It is the modern alternative to protocol specific authentication mechanisms (the LOGIN command in the case of IMAP) and is not the culprit here.
Dovecot has a configuration variable, which disables both LOGIN (you can see the LOGINDISABLED capability in dovecot's banner) and all plaintext SASL mechanisms, unless the connection is encrypted. You can switch it off by modifying:
disable_plaintext_auth = yes
to
disable_plaintext_auth = no
in /etc/dovecot/conf.d/10-auth.conf and reloading dovecot.
|
I have a laptop running KMail 5.7.3 on Bionic Beaver. I just got a new computer with Eoan Ermine and am trying to set up Kmail 5.11.3 to use the same IMAP server. I set up the IMAP account, tried to view my email, and got this error:
The server for account "IMAP Account 1" refused the supplied username and password. Do you want to go to the settings, have another attempt at logging in, or do nothing?SASL(-4): no mechanism available: No worthy mechs foundI set it to plain text (which is safe since the server and both clients are in the same house) and ran Wireshark and captured this:
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS LOGINDISABLED] Dovecot ready.
A000001 LOGOUT
* BYE Logging out
A000001 OK Logout completed.It didn't even try to log in! How do I make it work?
| How do I disable SASL in KMail? |
"The image is embedded as a base64 code block"
This is the current standard way to actually store an image within an email. You should read about Multipurpose Internet Mail Extensions, or MIME for short.

"The image is linked via an url such as imap://[emailprotected]/inboxname"
This is not so much storing an image as linking (i.e. referring to) some content in another email message. In principle, any applicable type of URL could be used for that.
But using an imap: URL only works if the recipient of the message can have access to the mailbox the URL refers to. It might be a viable strategy for deduplication of content within a single organization where all the users can be authenticated in a mutually trustworthy way and can give each other permissions to access the content. But in a wider scale, between different organizations, this is unlikely to be reliable or even feasible.
I think having a mail client follow random imap: URLs is not a good idea. Linking to an IMAP mailbox with hostile content might trick the mail client into bypassing some anti-malware and/or anti-spam protections. If a link points to a mailbox that is under someone else's control, it means that someone may be able to modify the linked content at will, which might allow some types of mail fraud or other deceptive practices.
And even the attempt to access the linked document (using any URL scheme at all) indicates that a particular email is being read... if those links are individualized, it might lead into an information disclosure, allowing the administrator of the link destination to determine which email recipients exist in your organization, which could be a privacy issue, and may be preparation for further social engineering attacks.
In email, many things are technically possible, but not necessarily good things to actually do.
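To make the first, MIME-based option concrete, here is a small, untested sketch with Python's standard email library showing how a client typically embeds an image as a related MIME part referenced by a Content-ID; the addresses and file name are placeholders. Stored this way, the image bytes travel inside the message itself, base64-encoded, rather than behind an imap:// link.

import email.message
import email.utils

msg = email.message.EmailMessage()
msg["From"] = "sender@example.com"        # placeholder addresses
msg["To"] = "recipient@example.com"
msg["Subject"] = "Embedded image example"

msg.set_content("Your mail client cannot display HTML; the image is attached.")

# HTML body that refers to the image through its Content-ID.
image_cid = email.utils.make_msgid()
msg.add_alternative(
    '<html><body><img src="cid:{}"></body></html>'.format(image_cid[1:-1]),
    subtype="html",
)

# Attach the image bytes to the HTML alternative as a "related" part;
# they end up base64-encoded inside the message.
with open("logo.png", "rb") as fh:
    msg.get_payload()[1].add_related(
        fh.read(), maintype="image", subtype="png", cid=image_cid
    )

print(msg.as_string()[:400])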
|
On a mail server running postfix and dovecot with imap, mails are stored plain on the disk. Within those mails, we have discovered that images are stored in 2 ways:
1. The image is embedded as a base64 code block
2. The image is linked via an url such as imap://[emailprotected]/inboxname
Which one is the correct way to store those images? Is there a standard to follow?
I am asking because the second way causes severe issues with our mail client.
| How are images in imap mails supposed to be stored? |
Solved using Thunderbird:
1. Install Thunderbird
2. Link the Hotmail account to Thunderbird using the same IMAP settings as in fetchmail
3. In Thunderbird install the ImportExportTools extension
4. Create a new folder in Local Folders, right click on the new folder, select ImportExportTools and import mbox file, select the user's mailbox file (e.g. /var/mail/myusername)
5. Drag the new local folder into the linked Hotmail account in Thunderbird, wait until all messages are uploaded back to the server |
I used fetchmail to access my Hotmail/Outlook mailbox. It downloaded every message inside Inbox, but now that folder is empty at the server. Now I see that fetchmail deletes messages by default:
-k | --keep : Keep retrieved messages on the remote mailserver. Normally, messages are deleted from the folder on the
mailserver after they have been retrieved. Specifying the keep option
causes retrieved messages to remain in your folder on the mailserver

I expected to find them in the Deleted folder or the Archive folder at the server but they are not there. Is there a way to find them (maybe something like an Imap/Deleted folder) or a way to reupload them?
| Recover/reupload messages deleted by fetchmail |
A hardware interrupt is not really part of CPU multitasking, but may drive it.
- Hardware interrupts are issued by hardware devices like disks, network cards, keyboards, clocks, etc. Each device or set of devices will have its own IRQ (Interrupt ReQuest) line. Based on the IRQ, the CPU will dispatch the request to the appropriate hardware driver. (Hardware drivers are usually subroutines within the kernel rather than a separate process.)
- The driver which handles the interrupt is run on the CPU. The CPU is interrupted from what it was doing to handle the interrupt, so nothing additional is required to get the CPU's attention. In multiprocessor systems, an interrupt will usually only interrupt one of the CPUs. (As a special case, mainframes have hardware channels which can deal with multiple interrupts without support from the main CPU.)
- The hardware interrupt interrupts the CPU directly. This will cause the relevant code in the kernel process to be triggered. For interrupts that take some time to process, the interrupt code may allow itself to be interrupted by other hardware interrupts.
In the case of the timer interrupt, the kernel scheduler code may suspend the process that was running and allow another process to run. It is the presence of the scheduler code which enables multitasking.
Software interrupts are processed much like hardware interrupts. However, they can only be generated by processes which are currently running.
- Typically software interrupts are requests for I/O (Input or Output). These will call kernel routines which will schedule the I/O to occur. For some devices the I/O will be done immediately, but disk I/O is usually queued and done at a later time. Depending on the I/O being done, the process may be suspended until the I/O completes, causing the kernel scheduler to select another process to run. I/O may occur between processes and the processing is usually scheduled in the same manner as disk I/O.
- The software interrupt only talks to the kernel. It is the responsibility of the kernel to schedule any other processes which need to run. This could be another process at the end of a pipe. Some kernels permit some parts of a device driver to exist in user space, and the kernel will schedule this process to run when needed.
It is correct that a software interrupt doesn't directly interrupt the CPU. Only code that is currently running can generate a software interrupt. The interrupt is a request for the kernel to do something (usually I/O) for the running process. A special software interrupt is a Yield call, which requests the kernel scheduler to check to see if some other process can run.
Response to comment:
- For I/O requests, the kernel delegates the work to the appropriate kernel driver. The routine may queue the I/O for later processing (common for disk I/O), or execute it immediately if possible. The queue is handled by the driver, often when responding to hardware interrupts. When one I/O completes, the next item in the queue is sent to the device.
- Yes, software interrupts avoid the hardware signalling step. The process generating the software request must be a currently running process, so they don't interrupt the CPU. However, they do interrupt the flow of the calling code.
If hardware needs to get the CPU to do something, it causes the CPU to interrupt its attention to the code it is running. The CPU will push its current state on a stack so that it can later return to what it was doing. The interrupt could stop: a running program; the kernel code handling another interrupt; or the idle process. |
I am not sure if I understand the concept of hardware and software interrupts.
If I understand correctly, the purpose of a hardware interrupt is to get some attention of the CPU, part of implementing CPU multitasking. Then what issues a hardware interrupt? Is it the hardware driver process?
If yes, where is the hardware driver process running? If it is running on the CPU, then it won't have to get attention of the CPU by hardware interrupt, right? So is it running elsewhere?
Does a hardware interrupt interrupt the CPU directly, or does it first contact the kernel process and the kernel process then contacts/interrupts the CPU?
On the other hand, I think the purpose of a software interrupt is for a process currently running on a CPU to request some resources.
What are the resources? Are they all in the form of running processes? For example, do CPU driver process and memory driver processes represent CPU and memory resources? Do the driver process of the I/O devices represent I/O resources? Are other running processes that the process would like to communicate with also resources?
If yes, does a software interrupt contact the processes (which represent the resources) indirectly via the kernel process? Is it right that unlike a hardware interrupt, a software interrupt never directly interrupts the CPU, but instead, it interrupts/contacts the kernel process? | What are software and hardware interrupts, and how are they processed? |
Here's a high-level view of the low-level processing. I'm describing a simple typical architecture, real architectures can be more complex or differ in ways that don't matter at this level of detail.
When an interrupt occurs, the processor looks if interrupts are masked. If they are, nothing happens until they are unmasked. When interrupts become unmasked, if there are any pending interrupts, the processor picks one.
Then the processor executes the interrupt by branching to a particular address in memory. The code at that address is called the interrupt handler. When the processor branches there, it masks interrupts (so the interrupt handler has exclusive control) and saves the contents of some registers in some place (typically other registers).
The interrupt handler does what it must do, typically by communicating with the peripheral that triggered the interrupt to send or receive data. If the interrupt was raised by the timer, the handler might trigger the OS scheduler, to switch to a different thread. When the handler finishes executing, it executes a special return-from-interrupt instruction that restores the saved registers and unmasks interrupts.
The interrupt handler must run quickly, because it's preventing any other interrupt from running. In the Linux kernel, interrupt processing is divided in two parts:
The “top half” is the interrupt handler. It does the minimum necessary, typically communicating with the hardware and setting a flag somewhere in kernel memory.
The “bottom half” does any other necessary processing, for example copying data into process memory, updating kernel data structures, etc. It can take its time and even block waiting for some other part of the system since it runs with interrupts enabled.
As usual on this topic, for more information, read Linux Device Drivers; chapter 10 is about interrupts.
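To make the split concrete, here is a skeletal sketch of a driver using one common bottom-half mechanism (a workqueue); the names (mydev_*), the IRQ number and the device pointer are made-up placeholders, not code from any real driver.
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/workqueue.h>

static void mydev_bottom_half(struct work_struct *work)
{
    /* Runs later with interrupts enabled: copy data, wake up readers, ... */
}

static DECLARE_WORK(mydev_work, mydev_bottom_half);

static irqreturn_t mydev_top_half(int irq, void *dev_id)
{
    /* Keep this minimal: talk to the hardware, note that data arrived. */
    schedule_work(&mydev_work);   /* defer the slow part to the bottom half */
    return IRQ_HANDLED;
}

/* In the driver's probe/init code, something like:
 *     err = request_irq(irq, mydev_top_half, IRQF_SHARED, "mydev", dev);
 * and free_irq(irq, dev) on the teardown path.
 */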
|
I just know that Interrupt is a hardware signal assertion caused in a processor pin. But I would like to know how Linux OS handles it.
What all are the things that happen when an interrupt occurs?
| How is an Interrupt handled in Linux? |
Ctrl+C sends SIGINT. The conventional action for SIGINT is to return to a program's toplevel loop, cancelling the current command and entering a mode where the program waits for the next command. Only non-interactive programs are supposed to die from SIGINT.
So it's natural that Ctrl+C doesn't kill ed, but causes it to return to its toplevel loop. Ctrl+C aborts the current input line and returns to the ed prompt.
The same goes for less: Ctrl+C interrupts the current command and brings you back to its command prompt.
For historical reasons, ed ignores SIGQUIT (Ctrl+\). Normal applications should not catch this signal and allow themselves to be terminated, with a core dump if enabled.
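As an illustration only (not ed's actual source), a program gets this kind of behaviour by catching SIGINT and jumping back to its command loop; the prompt and the "?" reply below are made up to mimic ed:
#include <stdio.h>
#include <setjmp.h>
#include <signal.h>

static sigjmp_buf toplevel;

static void on_sigint(int sig)
{
    (void)sig;
    siglongjmp(toplevel, 1);            /* abandon the current command */
}

int main(void)
{
    char line[256];
    struct sigaction sa = { 0 };

    sa.sa_handler = on_sigint;
    sigaction(SIGINT, &sa, NULL);

    for (;;) {
        if (sigsetjmp(toplevel, 1))
            printf("?\n");              /* like ed: complain, don't exit */
        printf("> ");
        fflush(stdout);
        if (fgets(line, sizeof line, stdin) == NULL)
            break;                      /* Ctrl+D (EOF) still quits */
        /* ... interpret the command in `line` here ... */
    }
    return 0;
}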
|
The program ed, a minimal text editor, cannot be exited by sending it an interrupt using Ctrl-C; instead it prints the error message "?" to the console. Why doesn't ed just exit when it receives the interrupt? Surely there's no reason why a cryptic error message is more useful here than just exiting. This behavior leads many new users into the following sort of interaction:
$ ed
hello
?
help
?
exit
?
quit
?
^C
?
^C
?
?
?
^D
$ su
# rm -f /bin/ed
Such a tragic waste—easily avoidable if ed simply agreed to be interrupted.
Another stubborn program exhibiting similar behavior is less which also doesn't appear to have much reason to ignore C-c. Why don't these programs just take a hint?
| Why can't ed be exited with C-c? |
All modern operating systems support multitasking. This means that the system is able to execute multiple processes at the same time; either in pseudo-parallel (when only one CPU is available) or nowadays with multi-core CPUs being common in parallel (one task/core).
Let's take the simpler case of only one CPU being available. This means that if you execute at the same time two different processes (let's say a web browser and a music player) the system is not really able to execute them at the same time. What happens is that the CPU is switching from one process to the other all the time; but this is happening extremely fast, thus you never notice it.
Now let's assume that while those two processes are executing, you press the reset button (bad boy). The CPU will immediately stop whatever is doing and reboot the system. Congratulations: you generated an interrupt.
The case is similar when you are programming and want to ask for a service from the CPU. The difference is that in this case you execute software code -- usually library procedures that are executing system calls (for example fopen for opening a file).
Thus, 1 describes two different ways of getting attention from the CPU.
Most modern operating systems support two execution modes: user mode and kernel mode. By default an operating system runs in user mode. User mode is very limited. For example, all I/O is forbidden; thus, you are not allowed to open a file from your hard disk. Of course this never happens in real, because when you open a file the operating system switches from user to kernel mode transparently. In kernel mode you have total control of the hardware.
If you are wondering why those two modes exist, the simplest answer is for protection. Microkernel-based operating systems (for example MINIX 3) have most of their services running in user mode, which makes them less harmful. Monolithic kernels (like Linux) have almost all their services running in kernel mode. Thus a driver that crashes in MINIX 3 is unlikely to bring down the whole system, while this is not unusual in Linux.
System calls are the primitive used in monolithic kernels (shared data model) for switching from user to kernel mode. Message passing is the primitive used in microkernels (client/server model). To be more precise, in a message passing system programmers also use system calls to get attention from the CPU. Message passing is visible only to the operating system developers. Monolithic kernels using system calls are faster but less reliable, while microkernels using message passing are slower but have better fault isolation.
Thus, 2 mentions two different ways of switching from user to kernel mode.
To revise, the most common way of creating a software interrupt, aka trap, is by executing a system call. Interrupts on the other hand are generated purely by hardware.
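As a small, hedged illustration of that last point, here is how a user program can issue the trap explicitly on Linux with glibc's syscall(2) wrapper; normally you would just call write() and let the C library perform the trap for you:
#include <unistd.h>
#include <sys/syscall.h>
#include <stdio.h>

int main(void)
{
    /* The usual way: the C library wraps the trap for us. */
    write(1, "via write()\n", 12);

    /* The explicit way: ask the kernel directly by system call number. */
    syscall(SYS_write, 1, "via syscall()\n", 14);

    printf("my pid (via SYS_getpid): %ld\n", (long)syscall(SYS_getpid));
    return 0;
}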
When we interrupt the CPU (either by software or by hardware) it needs to save somewhere its current state -- the process that it executes and at which point it did stop -- otherwise it will not be able to resume the process when switching back. That is called a context switch and it makes sense: Before you switch off your computer to do something else, you first need to make sure that you saved all your programs/documents, etc so that you can resume from the point where you stopped the next time you'll turn it on :)
Thus, 3 explains what needs to be done after executing a trap or an interrupt and how similar the two cases are. |
I am reading the Wikipedia article for process management. My focus is on Linux. I cannot figure out the relation and differences between system call, message passing and interrupt, in their concepts and purposes. Are they all for processes to make requests to kernel for resources and services?
Some quotes from the article and some other sources:
There are two possible ways for an OS to regain control of the processor during a program’s execution in order for the OS to perform de-allocation or allocation:
The process issues a system call (sometimes called a software interrupt); for example, an I/O request occurs requesting to access a file on hard disk.
A hardware interrupt occurs; for example, a key was pressed on the keyboard, or a timer runs out (used in pre-emptive multitasking).
There are two techniques by which a program executing in user mode can request the kernel's services:
* System call
* Message passing
An interrupt is an asynchronous signal indicating the need for attention or a synchronous event in software indicating the need for a change in execution.
A hardware interrupt causes the processor to save its state of execution and begin execution of an interrupt handler. Software interrupts are usually implemented as instructions in the instruction set, which cause a context switch to an interrupt handler similar to a hardware interrupt.
INT 0x80h is an old way to call kernel services (system functions). Currently, syscalls are used to invoke these services as they are faster than calling the interrupt. You can check this mapping in kernel's Interrupt Descriptor Table idt.c and in line 50 in the irq_vectors.h file.
The important bit that I believe answers your question is the header of that last file, where you can see how interrupt requests (IRQs) are organized.
This is the general layout of the IDT entries:
Vectors 0 ... 31 : system traps and exceptions - hardcoded events
Vectors 32 ... 127 : device interrupts
Vector 128 : legacy int80 syscall interface
Vectors 129 ... INVALIDATE_TLB_VECTOR_START-1 except 204 : device interrupts
Vectors INVALIDATE_TLB_VECTOR_START ... 255 : special interruptsIt really does not matter if it is by electrical means or software means. Whenever an interrupt is triggered, the kernel looks for its ID in the IDT and runs (in kernel mode) the associated interrupt handler. As they have to be really fast, they normally set some info to be handled later on by a softirq or a tasklet. Read chapter 2 (fast read...) of the Unreliable Guide To Hacking The Linux Kernel
Let me recommend also reading this really good and thorough answer at stackoverflow to Intel x86 vs x64 system call question, where INT 0x80h, sysenter and syscall are put in context...
I wrote my own (so very modest and still under construction) self learning page about interrupts and signals to help me understand the relation of signals and traps with interrupts (for instance SIGFPE - divide by zero).
|
I'm trying to get a deeper understanding of how system calls and hardware interrupts are implemented, and something that keeps confusing me is how they differ with respect to how they're handled. For example, I am aware that one way a system call used to be initiated is through the x86 INT 0x80 instruction.
Does the processor handle this the exact same way as if, say, a hardware peripheral had interrupted the CPU? If not, at what point do they differ? My understanding is they both index the IDT, just with different indices in the vector.
In that same sense, my understanding is there's this idea of a softirq to handle the "bottom half" processing, but I only see this form of "software interrupt" in reference to being enqueued to run by physical hardware interrupts. Do system call "software interrupts" also trigger softirqs for processing? That terminology confuses me a bit as well, as I've seen people refer to system calls as "software interrupts" yet softirqs as "software interrupts" as well.
In simple terms, you can think of make as having a (possibly large) number of steps, where each step takes a number of files as input and creates one file as output.
A step might be "compile file.c to file.o" or "use ld to link main.o and file.o into program". If you interrupt make with CtrlC, then the currently executing step will be terminated which will (or should) remove the output file it was working on. There are usually not any "half-ready binaries" left behind.
When you restart make, it will look at the timestamps of all the input and output files and rerun the steps where:
an input file has a newer timestamp than the output file
the output file does not exist
This generally means that if a step takes a long time to run (it's rare on modern computers, but the ld step for large programs could easily take many minutes when make was designed), then stopping and restarting make will start that step over from the beginning.
The reality of your average Makefile is considerably more complicated than the above description, but the fundamentals are the same.
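If it helps to see the decision rule spelled out, here is a rough sketch of it in C (the file names are just examples; real make also handles phony targets, missing sources, and much more):
#include <stdio.h>
#include <sys/stat.h>

int needs_rebuild(const char *input, const char *output)
{
    struct stat in, out;

    if (stat(output, &out) != 0)
        return 1;                           /* output does not exist */
    if (stat(input, &in) != 0)
        return 1;                           /* play safe if input is odd */
    return in.st_mtime > out.st_mtime;      /* input newer than output */
}

int main(void)
{
    printf("rebuild file.o from file.c? %s\n",
           needs_rebuild("file.c", "file.o") ? "yes" : "no");
    return 0;
}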
|
I know that I can interrupt a make process anytime without having to recompile the entire source tree again. As I know, make only compiles a target if it's not compiled yet, or the source code is modified after the last compilation.
But if I interrupt make, there will surely be one or more (depending on the concurrency level) half-ready binaries. What does it do with them the next time I run make? Or does it finish the current target when I press Ctrl+C to avoid partly compiled binaries?
| How does make continue compilation? |
Interruption of a system call by a signal handler occurs only in the case of various blocking system calls, and happens when the system call is interrupted by a signal handler that was explicitly established by the programmer.
Furthermore, in the case where a blocking system call is interrupted by a signal handler, automatic system call restarting is an optional feature. You elect to automatically restart system calls by specifying the SA_RESTART flag when establishing the signal handler. As stated in (for example) the Linux signal(7) manual page:
If a signal handler is invoked while a system call or library function call is blocked, then either:
* the call is automatically restarted after the signal handler returns; or
* the call fails with the error EINTR.
Which of these two behaviors occurs depends on the interface and whether or not the signal handler was established using the SA_RESTART flag (see sigaction(2)).
As hinted by the last sentence quoted above, even when you elect to use this feature, it does not work for all system calls, and the set of system calls for which it does work varies across UNIX implementations. The Linux signal(7) manual page notes a number of system calls that are automatically restarted when using the SA_RESTART flag, but also goes on to note various system calls that are never restarted, even if you specify that flag when establishing a handler, including:
* "Input" socket interfaces, when a timeout (SO_RCVTIMEO) has been set on the socket using setsockopt(2): accept(2), recv(2), recvfrom(2), recvmmsg(2) (also with a non-NULL timeout argument), and recvmsg(2).
* "Output" socket interfaces, when a timeout (SO_RCVTIMEO) has been set on the socket using setsockopt(2): connect(2), send(2), sendto(2), and sendmsg(2).
* File descriptor multiplexing interfaces: epoll_wait(2), epoll_pwait(2), poll(2), ppoll(2), select(2), and pselect(2).
* System V IPC interfaces: msgrcv(2), msgsnd(2), semop(2), and semtimedop(2).
For these system calls, manual restarting using a loop of the form described in APUE is essential, something like:
while ((ret = some_syscall(...)) == -1 && errno == EINTR)
continue;
if (ret == -1)
/* Handle error */ ; |
I am reading APUE and the Interrupted System Calls chapter confuses me.
I would like to write down my understanding based on the book, please correct me.A characteristic of earlier UNIX systems was that if a process caught a signal while the process was blocked in a ‘‘slow’’ system call, the system call was interrupted. The system call returned an error and errno was set to EINTR. This was done under the assumption that since a signal occurred and the process caught it, there is a good chance that something has happened that should wake up the blocked system call.So it's saying that earlier UNIX systems had a feature: if my program uses a system call, it would be interrupted/stopped if at any time the program catches a signal. (Does a default handler also count as a catch?)
For example, if I have a read system call which reads 10GB of data, then while it's reading, I send any one of the signals (e.g. kill -SIGUSR1 pid), and read would fail and return.To prevent applications from having to handle interrupted system calls, 4.2BSD introduced the automatic restarting of certain interrupted system calls. The system calls that were automatically restarted are ioctl, read, readv, write, writev, wait, and waitpid. As we’ve mentioned, the first five of these functions are interrupted by a signal only if they are operating on a slow device; wait and waitpid are always interrupted when a signal is caught. Since this caused a problem for some applications that didn’t want the operation restarted if it was interrupted, 4.3BSD allowed the process to disable this feature on a per-signal basis.So before automatic restarting was introduced, I had to handle interrupted system calls on my own, and would need to write code like:The problem with interrupted system calls is that we now have to handle the error return explicitly. The typical code sequence (assuming a read operation and assuming that we want to restart the read even if it’s interrupted) would be:
again:
    if ((n = read(fd, buf, BUFFSIZE)) < 0) {
        if (errno == EINTR)
            goto again; /* just an interrupted system call */
        /* handle other errors */
    }
But nowadays I don't have to write this kind of code, because of the automatic restarting mechanism.So if my understanding is correct, what/why should I care about interrupted system calls now? It seems the system/OS handles it automatically.
| What is interrupted system call? |
On a multiprocessor/multicore system, you might find a daemon process named irqbalance. Its job is to adjust the distribution of hardware interrupts across processors.
At boot time, when the firmware hands over the control of the system to the kernel, initially just one CPU core is running. The first core (usually core #0, sometimes called the "monarch CPU/core") initially takes over all the interrupt handling responsibilities from the firmware before initializing the system and starting up the other CPU cores. So if nothing is done to distribute the load, the core that initially started the system ends up with all the interrupt handling duties.
https://www.kernel.org/doc/Documentation/IRQ-affinity.txt suggests that on modern kernels, all CPU cores are allowed to handle IRQs equally by default. But this might not be the optimal solution, as it may lead to e.g. inefficient use of CPU cache lines with frequent IRQ sources. It is the job of irqbalance to fix that.
irqbalance is not a kernel process: it's a standalone binary /usr/sbin/irqbalance that can run either in one-shot mode (i.e. adjust the distribution of interrupts once as part of the boot process, and exit) or as a daemon. Different Linux distributions can elect to use it differently, or to omit it altogether. It allows easy testing and implementation of arbitrarily complex strategies for assigning IRQs to processors by simply updating the userspace binary.
It works by using per-IRQ /proc/irq/%i/smp_affinity files to control which IRQs can be handled by each CPU. If you're interested in details, check the source code of irqbalance: the actual assignment of IRQ settings happens in activate.c.
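For illustration, a tiny sketch that just reads one of those files; IRQ 55 is an arbitrary example, and writing a new mask (to move the IRQ) typically requires root:
#include <stdio.h>

int main(void)
{
    char path[64], mask[64];
    int irq = 55;                         /* example IRQ number */
    FILE *f;

    snprintf(path, sizeof path, "/proc/irq/%d/smp_affinity", irq);
    f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }
    if (fgets(mask, sizeof mask, f))
        printf("IRQ %d may be handled by CPUs in mask %s", irq, mask);
    fclose(f);
    return 0;
}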
|
I've been reading Linux Kernel Development and there's something that's not entirely clear to me -- when an interrupt is triggered by the hardware, what's the criterion to decide on which CPU to run the interrupt handling logic?
I could imagine it having to be always the same CPU that raised the IO request, but as the thread is for all purposes now sleeping there would not really be that much of a point in doing that.
On the other hand, there may be timing interrupts (for the scheduler, for instance) that need to be raised. On an SMP system are they always raised on the same core (let's say, #0) or they're always pretty much raised at any core?
How does it actually work?
Thanks
| What's the policy determining which CPU handles which interrupt in the Linux Kernel? |
What shell is used is a concern as different shells handle job control differently (and job control is complicated; job.c in bash presently weighs in at 3,300 lines of C according to cloc). pdksh 5.2.14 versus bash 3.2 on Mac OS X 10.11 for instance show:
$ cat code
pkill yes
yes >/dev/null &
pid=$!
echo $pid
sleep 2
kill -INT $pid
sleep 2
pgrep yes
$ bash code
38643
38643
$ ksh code
38650
$ Also relevant here is that yes performs no signal handling so inherits whatever there is to be inherited from the parent shell process; if by contrast we do perform signal handling—
$ cat sighandlingcode
perl -e '$SIG{INT} = sub { die "ouch\n" }; sleep 5' &
pid=$!
sleep 2
kill -INT $pid
$ bash sighandlingcode
ouch
$ ksh sighandlingcode
ouch
$ —the SIGINT is triggered regardless the parent shell, as perl here unlike yes has changed the signal handling. There are system calls relevant to signal handling which can be observed with things like DTrace or here strace on Linux:
-bash-4.2$ cat code
pkill yes
yes >/dev/null &
pid=$!
echo $pid
sleep 2
kill -INT $pid
sleep 2
pgrep yes
pkill yes
-bash-4.2$ rm foo*; strace -o foo -ff bash code
21899
21899
code: line 9: 21899 Terminated yes > /dev/null
-bash-4.2$ We find that the yes process ends up with SIGINT ignored:
-bash-4.2$ egrep 'exec.*yes' foo.21*
foo.21898:execve("/usr/bin/pkill", ["pkill", "yes"], [/* 24 vars */]) = 0
foo.21899:execve("/usr/bin/yes", ["yes"], [/* 24 vars */]) = 0
foo.21903:execve("/usr/bin/pgrep", ["pgrep", "yes"], [/* 24 vars */]) = 0
foo.21904:execve("/usr/bin/pkill", ["pkill", "yes"], [/* 24 vars */]) = 0
-bash-4.2$ grep INT foo.21899
rt_sigaction(SIGINT, {SIG_DFL, [], SA_RESTORER, 0x7f18ebee0250}, {SIG_DFL, [], SA_RESTORER, 0x7f18ebee0250}, 8) = 0
rt_sigaction(SIGINT, {SIG_DFL, [], SA_RESTORER, 0x7f18ebee0250}, {SIG_DFL, [], SA_RESTORER, 0x7f18ebee0250}, 8) = 0
rt_sigaction(SIGINT, {SIG_IGN, [], SA_RESTORER, 0x7f18ebee0250}, {SIG_DFL, [], SA_RESTORER, 0x7f18ebee0250}, 8) = 0
--- SIGINT {si_signo=SIGINT, si_code=SI_USER, si_pid=21897, si_uid=1000} ---
-bash-4.2$ Repeat this test with the perl code and one should see that SIGINT is not ignored, or also that under pdksh there is no ignore being set as there is in bash. With "monitor mode" turned on like it is in interactive mode in bash, yes is killed.
-bash-4.2$ cat monitorcode
#!/bin/bash
set -m
pkill yes
yes >/dev/null &
pid=$!
echo $pid
sleep 2
kill -INT $pid
sleep 2
pgrep yes
pkill yes
-bash-4.2$ ./monitorcode
22117
[1]+ Interrupt yes > /dev/null
-bash-4.2$ |
I have the following in a script:
yes >/dev/null &
pid=$!
echo $pid
sleep 2
kill -INT $pid
sleep 2
ps aux | grep yes
When I run it, the output shows that yes is still running by the end of the script. However, if I run the commands interactively then the process terminates successfully, as in the following:
> yes >/dev/null &
[1] 9967
> kill -INT 9967
> ps aux | grep yes
sean ... 0:00 grep yes
Why does SIGINT terminate the process in the interactive instance but not in the scripted instance?
EDIT
Here's some supplementary information that may help to diagnose the issue. I wrote the following Go program to simulate the above script.
package main

import (
    "fmt"
    "os"
    "os/exec"
    "time"
)

func main() {
    yes := exec.Command("yes")
    if err := yes.Start(); err != nil {
        die("%v", err)
    }

    time.Sleep(time.Second * 2)

    kill := exec.Command("kill", "-INT", fmt.Sprintf("%d", yes.Process.Pid))
    if err := kill.Run(); err != nil {
        die("%v", err)
    }

    time.Sleep(time.Second * 2)

    out, err := exec.Command("bash", "-c", "ps aux | grep yes").CombinedOutput()
    if err != nil {
        die("%v", err)
    }
    fmt.Println(string(out))
}

func die(msg string, args ...interface{}) {
    fmt.Fprintf(os.Stderr, msg+"\n", args...)
    os.Exit(1)
}
I built it as main and running ./main in a script, and running ./main and ./main & interactively give the same, following, output:
sean ... 0:01 [yes] <defunct>
sean ... 0:00 bash -c ps aux | grep yes
sean ... 0:00 grep yes
However, running ./main & in a script gives the following:
sean ... 0:03 yes
sean ... 0:00 bash -c ps aux | grep yes
sean ... 0:00 grep yes
This makes me believe that the difference has less to do with Bash's own job control, though I'm running all of this in a Bash shell.
| Why doesn't SIGINT work on a background process in a script? |
Check /proc/interrupts to find out whether one or more interrupts occur excessively. Hint: several thousand interrupts per second are no cause for alarm.
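A rough, illustrative way to do that from a small C program (deliberately simplistic parsing; the one-second interval is arbitrary) is to sample /proc/interrupts twice and print the per-line deltas, so a runaway source stands out:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define MAXLINES 512

/* Sum the per-CPU counters of every line in /proc/interrupts. */
static int snapshot(char names[][64], long long counts[])
{
    FILE *f = fopen("/proc/interrupts", "r");
    char line[1024];
    int n = 0;

    if (!f)
        return -1;
    if (!fgets(line, sizeof line, f)) {   /* skip the CPU header line */
        fclose(f);
        return -1;
    }
    while (n < MAXLINES && fgets(line, sizeof line, f)) {
        char *colon = strchr(line, ':');
        long long total = 0;
        char *p, *end;

        if (!colon)
            continue;
        *colon = '\0';
        snprintf(names[n], 64, "%s", line);
        for (p = colon + 1; ; p = end) {  /* sum the per-CPU columns */
            long long v = strtoll(p, &end, 10);
            if (end == p)
                break;
            total += v;
        }
        counts[n++] = total;
    }
    fclose(f);
    return n;
}

int main(void)
{
    static char names[MAXLINES][64];
    static long long before[MAXLINES], after[MAXLINES];
    int i, n = snapshot(names, before);

    sleep(1);
    if (n < 0 || snapshot(names, after) < n)
        return 1;
    for (i = 0; i < n; i++)
        if (after[i] != before[i])
            printf("%10s: %lld interrupts/s\n", names[i], after[i] - before[i]);
    return 0;
}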
Excessive interrupts (aka interrupt storms) can have multiple reasons, one of them even being hardware issues (noisy interrupt line).
To further answer your question we need to know what OS on what hardware you use.
|
man ksoftirqd indicates that:If ksoftirqd is taking more than a tiny percentage of CPU time, this indicates the machine is under heavy soft interrupt load.
I'm working with a Debian Wheezy system under generally high system utilization in which ksoftirqd processes utilize excessive cpu and disk resources for short periods of time. During that time, the system operates at a snail's pace.
How can one begin to understand what the root cause is for this ksoftirqd resource utilization spikes?
| How to debug causes of excessive ksoftirqd resource usage? |
A page fault occurs when a memory access fails because the MMU lookup for the virtual address ended in an invalid descriptor or in a descriptor indicating a lack of permissions (e.g. write attempt to a read-only page). When a page fault occurs, the processor performs a few actions; the details are specific to each processor architecture but the gist is the same:
Switch to a privileged mode (e.g. kernel mode).
Set some registers to indicate, at least, the nature of the fault, and the program counter and processor mode at the point of the fault.
Jump to a particular address in memory, indicated by a register or itself looked up at a particular location in memory: the address of the page fault handler.
To give an example, on a (32-bit) ARM processor:
The dfsr register is set to a value that describes the fault (whether it was due to a read or write, to a processor instruction or a DMA, etc.).
The dfar register is set to the virtual address that was the target of the access that caused the fault.
The processor switches to abort mode (one of the kernel-level privileged modes).
The lr register is set to the program counter at the time of the fault, and the spsr register is set to the program status register (cpsr, the one that contains the mode bits, among other things) at the time of the fault.
The sp and cpsr registers are banked: they are restored from the value last set in abort mode.
The execution jumps to the abort vector, one of the exception vectors.
The code of the page fault handler is part of the kernel of the operating system. Its job is to analyze the cause of the fault and to do something about it. It can consult the special-purpose registers that provide information about the nature of the fault, and if needed it can also inspect the instruction that the program was executing. It can also look up the descriptor in the MMU table; invalid descriptors can sometimes encode information such as the location of a page in swap space. The kernel knows which task is currently executing by looking at the value of a global variable or register that it updates on each context switch. Here are a few common behaviors on a page fault:
The data about the process's memory mappings indicate that the page is in swap. The kernel finds a spare physical page, or obtains one by removing a page that contained disk cache, or obtains one by first saving its content to swap. Then it loads the data from the swap to this physical page, and changes the MMU table so that the virtual address that caused the fault is now attached to that physical page in the process's MMU map. Finally, the kernel arranges to switch back to the process at the point of the instruction that caused the fault; this time the instruction will be executed successfully.
The data about the process's memory mappings indicate that the page is a copy-on-write page, and a write access was attempted. Rather similarly to the previous case, the kernel obtains a spare physical page, copies data to it (here, from the page that was read-only), changes the MMU descriptor, and arranges for the process to execute the instruction again.
The data about the process's memory mappings indicate that the page is not mapped, or that it doesn't have the requisite permissions. In that case the kernel delivers a SIGSEGV signal (segmentation fault) to the process: the execution of the process resumes at the signal handler rather than at the original location, but the original location is saved on the stack. If the process has no handler for SIGSEGV, it is terminated.It is not in general possible to determine that an exception is about to happen, except by knowing the virtual memory configuration and making checks before memory accesses. The normal flow of operation is that the reason for the page fault is recorded by the processor when the page fault happens.
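As a userland illustration of that last case (not of the in-kernel handling), a process can install a SIGSEGV handler and see the faulting address the kernel reports; everything below is a throwaway demo:
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>

static void on_segv(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    /* fprintf in a handler is not async-signal-safe; fine for a demo */
    fprintf(stderr, "segfault at address %p\n", info->si_addr);
    _Exit(1);       /* returning would just re-run the faulting instruction */
}

int main(void)
{
    struct sigaction sa = { 0 };

    sa.sa_sigaction = on_segv;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    *(volatile int *)0 = 42;        /* touch an unmapped page */
    return 0;
}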
|
When a page fault occurs in a Linux system, the interrupt-handler has to figure out the reason why the page fault happened. But how? Is there a special number for that somewhere? If yes, where is that number logged?
Is it possible to know the reason for the page fault before raising the exception? E.g.:
Step-1 Look for the reason by the CPU
Step-2 raise the exception | What happens after a page fault? |
Your understanding so far is correct, but you miss most of the complexity that's built on that. The processing in the kernel happens in several layers, and the keypress "bubbles up" through the layers.
The USB communication protocol itself is a lot more involved. The interrupt handler routine for USB handles this, and assembles a complete USB packet from multiple fragments, if necessary.
The key press uses the so-called HID ("Human interface device") protocol, which is built on top of USB. So the lower USB kernel layer detects that the complete message is a USB HID event, and passes it to the HID layer in the kernel.
The HID layer interprets this event according to the HID descriptor it has required from the device on initialization. It then passes the events to the input layer. A single HID event can generate multiple key press events.
The input layer uses kernel keyboard layout tables to map the scan code (position of the key on the keyboard) to a key code (like A) and interprets Shift, Alt, etc. The result of this interpretation is made available via /dev/input/event* to userland processes. You can use evtest to watch those events in real-time.
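For illustration, this is roughly what evtest does at its core; /dev/input/event0 is just an example node (the right number varies, and opening it usually requires root):
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <linux/input.h>

int main(void)
{
    struct input_event ev;
    int fd = open("/dev/input/event0", O_RDONLY);   /* example device */

    if (fd < 0) {
        perror("open");
        return 1;
    }
    while (read(fd, &ev, sizeof ev) == sizeof ev) {
        if (ev.type == EV_KEY)                       /* 0 = release, 1 = press, 2 = repeat */
            printf("key code %u, value %d\n", ev.code, ev.value);
    }
    close(fd);
    return 0;
}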
But processing is not finished here. The X Server (responsible for graphics) has a generic evdev driver that reads events from /dev/input/event* devices, and then maps them again according to a second set of keyboard layout tables (you can see those partly with xmodmap and fully via the XKBD extension). This is because the X server predates the kernel input layer, and in earlier times had drivers to handle mouse and PS/2 keys directly.
Then the X server sends a message to the X client (application) containing the keyboard event. You can see those messages with the xev application. LibreOffice will process this event directly, VIM will be running in an xterm which will process the event, and (you guessed it) again add some extra processing to it, and finally pass it to VIM via stdin.
Complicated enough?
|
I'm currently learning about the Linux Kernel and OSes in general, and while I have found many great resources concerning IRQs, Drivers, Scheduling and other important OS concepts, as well as keyboard-related resources, I am having a difficult time putting together a comprehensive overview of how the Linux Kernel handles a button press on a keyboard. I'm not trying to understand every single detail at this stage, but am rather trying to connect concepts, somewhat comprehensively.
I have the following scenario in mind:I'm on a x64 machine with a single processor.
There're a couple of processes running, notably the Editor VIM (Process #1) and say LibreOffice (Process #2).
I'm inside VIM and press the a-key. However, the process that's currently running is Process #2 (with VIM being scheduled next).
This is how I imagine things to go down right now:
The keyboard, through a series of steps, generates an electrical signal (USB Protocol Encoding) that it sends down the USB wire.
The signal gets processed by a USB-Controller, and is send through PCI-e (and possibly other controllers / buses?) to the Interrupt Controller (APIC). The APIC triggers the INT Pin of the processor.
The processor switches to Kernel Mode and request an IRQ-Number from the APIC, which it uses as an offset into the Interrupt Descriptor Table Register (IDTR). A descriptor is obtained, that is then used to obtain the address of the interrupt handler routine. As I understand it, this interrupt handler was initially registered by the keyboard driver?
The interrupt handler routine (in this case a keyboard handler routine) is invoked.
This brings me to my main question: By which mechanism does the interrupt handler routine communicate the pressed key to the correct Process (Process #1)? Does it actually do that, or does it simply write the pressed key into a buffer (available through a char-device?), that is read-only to one process at a time (and currently "attached" to Process #1)? I don't understand at which time Process #1 receives the key. Does it process the data immediately, as the interrupt handler schedules the process immediately, or does it process the key data the next time that the scheduler schedules it?
When this handler returns (IRET), the context is switched back to the previously executing process (Process #2). | How does a keyboard press get processed in the Linux Kernel?
do_IRQ: 1.55 No irq handler for vector
This message can be found in Linux kernel source file arch/x86/kernel/irq.c, so it's about x86-specific handling of interrupts.
/*
 * do_IRQ handles all normal device IRQ's (the special
 * SMP cross-CPU interrupts have their own specific
 * handlers).
 */
__visible unsigned int __irq_entry do_IRQ(struct pt_regs *regs)
{
    struct pt_regs *old_regs = set_irq_regs(regs);
    struct irq_desc *desc;
    /* high bit used in ret_from_ code */
    unsigned vector = ~regs->orig_ax;

    entering_irq();

    /* entering_irq() tells RCU that we're not quiescent. Check it. */
    RCU_LOCKDEP_WARN(!rcu_is_watching(), "IRQ failed to wake up RCU");

    desc = __this_cpu_read(vector_irq[vector]);

    if (!handle_irq(desc, regs)) {
        ack_APIC_irq();

        if (desc != VECTOR_RETRIGGERED && desc != VECTOR_SHUTDOWN) {
            pr_emerg_ratelimited("%s: %d.%d No irq handler for vector\n",
                                 __func__, smp_processor_id(),
                                 vector);
        } else {
            __this_cpu_write(vector_irq[vector], VECTOR_UNUSED);
        }
    }

    exiting_irq();

    set_irq_regs(old_regs);
    return 1;
}
So, the first number (before the dot) is the ID of the reporting processor, and the 55 is the interrupt vector as you discovered. The message could be avoided if the IRQ vector was in the state VECTOR_SHUTDOWN or VECTOR_RETRIGGERED.
According to arch/x86/kernel/apic/vector.c the state VECTOR_SHUTDOWN indicates an interrupt vector that was intentionally cleared (e.g. a hardware device was stopped and its driver unloaded in a controlled fashion).
The VECTOR_RETRIGGERED is set in fixup_irqs() at the end of arch/x86/kernel/irq.c and seems to be related to CPU hotplugging, or more specifically marking a CPU as offline.
So, neither of those states should be applicable on a regular PC at boot time.
Your idea of a fixed correspondence between interrupt vector numbers and causes of interrupts would have been valid with the ISA bus architecture of the original IBM PC... and quite a while after that.
But somewhere in the era of 486 processors and the first Pentiums, an APIC (Advanced Programmable Interrupt Controller) was introduced. It was one of the components enabling multiple processors to coexist in PC architecture. It opened the way to increase the number of available hardware interrupt lines from 15 (the pair of 8259 interrupt controllers like in the first IBM PC-AT), eventually up to 224 discrete hardware interrupts. This enabled the design of more complex systems, and also helped in making truly auto-configurable buses possible.
Essentially, either the system firmware or the operating system is supposed to configure the device on the bus to use a particular interrupt line, and then to program the APIC to route the interrupt signal to an available interrupt vector in the CPU. This requires knowledge on how the bus is actually wired on the motherboard, so in practice this is almost exclusively done by the system firmware, and many of the exceptions are specifically to patch up firmware bugs.
The PCI bus originally had its interrupts mapped to ISA-style interrupts, but when APICs became integrated in CPUs, this limitation could be removed, reducing IRQ latency and allowing more complex systems to be built. With PCI bus version 2.2, Message Signaled Interrupts (MSI) were introduced, which allowed discrete hardware interrupts without dedicated physical interrupt lines. In PCI Express, MSI became the standard way to handle interrupts.
So... it looks like your system's hardware includes an active source of interrupts routed to IRQ vector 55, but Linux currently has no driver loaded to handle it. Since the PCI configuration space is readable in a standard fashion and Linux does read it, any devices on the PCI bus (or on PCIe links) should have been detected, identified and their interrupt configuration should be known.
It also might be that the source of IRQ's is something that is not a PCI device, i.e. a platform device, for example something that is part of the system chipset or connected to them using some non-PCI-compatible interface. All such devices should be described by the firmware ACPI tables... but apparently in your case, this source of these IRQ's isn't.
My conclusion is that this might be a firmware bug: see if HP offers a BIOS update for your system. (At this moment, HP's support downloads page for the Pavilion Elite m9660de seems to be failing to load for me.)
According to this thread in Ubuntu forums it could also be a hardware bug in the VIA chipset: if your system has this chipset, adding the boot option pci=nomsi,noaer in GRUB might fix it.
If your current kernel has debugfs support and CONFIG_GENERIC_IRQ_DEBUGFS kernel option enabled, you might get a lot of information on the state of IRQ vector 55 with the following commands as root:
mount -t debugfs none /sys/kernel/debug
grep "Vector.*55" /sys/kernel/debug/irq/irqs/*
This should tell you which files in that directory mention "Vector: 55". Reading those files should tell you basically everything the kernel knows about that interrupt vector.
|
I'm trying to boot/install Linux for learning purposes, using an older PC (HP Pavilion Elite m9660de). The following message is the first thing that shows up when booting (Ubuntu and Fedora, both from a bootable USB-stick and a fresh install):
do_IRQ: 1.55 No irq handler for vector
do_IRQ: 2.55 No irq handler for vector
do_IRQ: 3.55 No irq handler for vector
The boot process will stall there for a very long time (like 15 minutes), and eventually continue.
I'm not asking to get support for this concrete problem, but rather to understand how to interpret such a message.
I found out in the kernel code of do_IRQ that 55 is a vector. As I understand it, this is more or less the number of an interrupt, corresponding to a memory location containing the address of the interrupt handler.
I would have expected that there is a fixed correspondence between these numbers and the events that cause the interrupt. Where can I find documentation on this? Is this Linux-specific, processor-specific or motherboard-specific?
| How to deduce the nature of an interrupt from its number? |
Set-up a trap handling SIGINT (Ctrl+C).
In your case that would be something like:
trap "kill -2 $pid1 $pid2" SIGINTJust place it before the wait command.
|
I'm starting two child processes from bash script and waiting both for completion using wait command:
./proc1 &
pid1=$!
echo "started proc1: ${pid1}"

./proc2 &
pid2=$!
echo "started proc2: ${pid2}"

echo -n "working..."
wait $pid1 $pid2
echo " done"
| Interrupt child processes from bash script on Ctrl+C |
The Linux timer interrupt handler doesn’t do all that much directly. For x86, you’ll find the default PIT/HPET timer interrupt handler in arch/x86/kernel/time.c:
static irqreturn_t timer_interrupt(int irq, void *dev_id)
{
global_clock_event->event_handler(global_clock_event);
return IRQ_HANDLED;
}This calls the event handler for global clock events, tick_handler_periodic by default, which updates the jiffies counter, calculates the global load, and updates a few other places where time is tracked.
As a side-effect of an interrupt occurring, __schedule might end up being called, so a timer interrupt can also lead to a task switch (like any other interrupt).
Changing CONFIG_HZ changes the timer interrupt’s periodicity. Increasing HZ means that it fires more often, so there’s more timer-related overhead, but less opportunity for task scheduling to wait for a while (so interactivity is improved); decreasing HZ means that it fires less often, so there’s less timer-related overhead, but a higher risk that tasks will wait to be scheduled (so throughput is improved at the expense of interactive responsiveness). As always, the best compromise depends on your specific workload. Nowadays CONFIG_HZ is less relevant for scheduling aspects anyway; see How to change the length of time-slices used by the Linux CPU scheduler?
See also How is an Interrupt handled in Linux?
|
I have two questions about the Linux kernel.
Specifically, does anybody know exactly, what Linux does in the timer interrupt? Is there some documentation about this?
And what is affected when changing the CONFIG_HZ setting, when building the kernel?
Thanks in advance!
| Linux timer interrupt |
The keypress generates an interrupt, just like you figured out. The interrupt is processed by an interrupt handler; which handler depends on the type of hardware, e.g. USB keyboard or PS/2 keyboard. The interrupt handler reads the key code from the hardware and buffers it. From the buffer the character is picked up by the tty driver, which, in the case of Ctrl-C recognizes it as the interrupt character and sends a SIGINT to the foreground process group of the terminal. See n_tty.c.
Note that the tty driver is only involved in "terminal"-type (command line) interfaces, like the Linux console, serial terminals (/dev/ttyS*), and pseudo ttys. GUI systems (X11, Wayland implementations) handle input devices differently.
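As a small illustration of where that per-terminal setting lives, a program can ask the tty driver which byte is currently the interrupt character (normally 0x03, i.e. Ctrl+C, changeable with stty intr):
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios t;

    if (tcgetattr(STDIN_FILENO, &t) != 0) {
        perror("tcgetattr");        /* stdin is probably not a tty */
        return 1;
    }
    printf("VINTR is 0x%02x, ISIG is %s\n", t.c_cc[VINTR],
           (t.c_lflag & ISIG) ? "on" : "off");
    return 0;
}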
|
I'm studying the Linux kernel right now with O'Reilly's Understanding the Linux Kernel and lately covered the signal and interrupt handling chapter, sticking to some basic 2.4 Linux version and diving into code as far as I can understand.
Yet, I couldn't explain to myself, nor find an answer elsewhere, what the instruction flow is that occurs when, let's say, a Ctrl+C is pressed for a process which runs in the shell.
What I did figure out so far:
once a key is pressed the APIC raises an IRQ line to the CPU
if interrupts are not masked, the CPU loads the corresponding interrupt handler from the IDT
then, some critical interrupt handler code is invoked, handling the pressed character further from the keyboard device's register through the APIC to other registers
From here it's vague for me.
I do understand, though, that interrupt handling is not in process context while exception handling is, so it was easy to figure out how an exception updates current->thread.error_code and current->thread.trap_no, finally invoking force_sig. Yet, once an interrupt handler is executed, as in the example above, how does it finally get into context with the desired process and generate the signal?
| How does keyboard interrupt ends up as process signal |
The Linux kernel is reentrant (like all UNIX ones), which simply means that multiple processes can be executed by the CPU. He doesn't have to wait till a disk access read is handled by the deadly slow HDD controller, the CPU can process some other stuff until the disk access is finished (which itself will trigger an interrupt if so).
Generally, an interrupt can be interrupted by another interrupt (preemption); that's called 'Nested Execution'. Depending on the architecture, there are still some critical functions which have to run without interruption (non-preemptive) by completely disabling interrupts. On x86, these are some time-relevant functions (time.c, hpet.c) and some xen stuff.
There are only two priority levels concerning interrupts: 'enable all interrupts' or 'disable all interrupts', so I guess your "high priority interrupt" is the second one. This is the only behavior the Linux kernel knows concerning interrupt priorities and has nothing to do with real-time extensions.
If an interruptible interrupt (your "low priority interrupt") gets interrupted by another interrupt ("high" or "low"), the kernel saves the old execution code of the interrupted interrupt and starts to process the new interrupt. This "nesting" can happen multiple times and thus can create multiple levels of interrupted interrupts. Afterwards, the kernel reloads the saved code from the old interrupt and tries to finish the old one.
|
I was reading "Linux device drivers, 3rd edition" and don't completely understand a part describing interrupt handlers. I would like to clarify:
are the interrupt handlers in Linux nonpreemptible?
are the interrupt handlers in Linux non-reentrant?
I believe I understand the model of Top/Bottom halves quite well, and according to it the interrupts are disabled for as long as the TopHalf is being executed, thus the handler can't be re-entered, am I right?
But what about high priority interrupts? Are they supported by vanilla Linux or specific real-time extensions only? What happens if a low priority interrupt is interrupted by a high priority one?
| re-entrency of interrupts in Linux |
The I/O device (controller) is busy transferring data from the device buffer to the device. It goes from idle to transferring. This is the peak for I/O device. It goes back to idle when the transfer is done, until the next request.
The CPU curve shows a peak when the transfer is done because the CPU is notified by the device (through an interrupt).
|
I'm studying the book 'Operating System Concepts' 9th edition.
In the first chapter, part 1.2.1 (computer system operation), I can't understand figure 1.3. Can anyone give me a quick interpretation of it, especially about the peaks of this graph?
| The interrupt timeline for a single process doing output |
If all the following hold:
More than one CPU in the VM
The VM is pinned (via the host) to specific dedicated CPUs (not shared with other VMs) with a 1-1 mapping of VM CPUs to host CPUs
The VM has dedicated (e.g. via passthrough) access to storage/network hardware
then in-VM IRQ rebalancing still makes sense.
Without multiple CPUs within the VM, in-VM IRQ rebalancing obviously serves no purpose. For the other points things become tricky because the "real" CPUs your VM is sitting on can be shuffling around underneath it and the VM's OS doesn't know which of the virtual interrupts are going to be handled by which of the real CPUs. Additionally, if the real CPU is being shared between multiple VMs you don't actually know what other work it is doing or when the virtual CPU is going to get around to being serviced so the "virtual rebalancing" could actually be making things worse...
PS: Two years ago isn't that old! Some information is timeless...
PPS: VMCI is vestigial and isn't supported on ESXi 6 or later.
|
I have a Linux farm in VMware Enterprise 5.5. The VMs are (mostly) 64-bit amd64 Debian Jessie servers with SysVinit and not systemd. The VMs have open-vm-tools installed.
I paravirtualized their Ethernet and disk controllers. Paravirtual drivers are ones where the virtualization platform does not have to emulate another device, such as an Intel E1000 NIC or a LSI Logic SAS SCSI adapter. These paravirtual drivers essentially cut the middleman out by ditching the emulation layer, which usually results in significant performance increases.
As lspci | egrep "PVSCSI|VMXNET" can show, ethernet and disks are now paravirtualized:
3:00.0 Serial Attached SCSI controller: VMware PVSCSI SCSI Controller (rev 02)
0b:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
Doing cat on /proc/interrupts, it is easy to show there are interrupts associated with them and with the functionalities paravirtualization depends on:
56: 6631557 0 PCI-MSI 1572864-edge vmw_pvscsi
57: 72647654 0 PCI-MSI 5767168-edge eth0-rxtx-0
58: 44570979 0 PCI-MSI 5767169-edge eth0-rxtx-1
59: 0 0 PCI-MSI 5767170-edge eth0-event-2
60: 1 0 PCI-MSI 129024-edge vmw_vmci
61: 0 0 PCI-MSI 129025-edge vmw_vmci
vmw_vmci: The Virtual Machine Communication Interface. It enables high-speed communication between host and guest in a virtual environment via the VMCI virtual device.
INT NAME RATE MAX
57 [ 0 0 ] 142 Ints/s (max: 264)
58 [ 0 0 ] 155 Ints/s (max: 185)
59 [ 0 0 ] 119 Ints/s (max: 419)
60 [ 0 0 ] 133 Ints/s (max: 479)I am quite sure irqbalance is not needed in VMs with CPU affinity, and in single core VMs. The two servers where we have CPU affinity manually configured have indeed special needs, as in general cases, the literature says irqbalance is supposed to do a better job.
So my question is, when is irqbalance necessary to distribute interrupt load via the different CPUs for multi-CPU Linux VMs?
Note: I already consulted some papers, and a related (dated) serverfault post, they are not very clear about it. I also found an academic paper voicing similar concerns for Xen. vBalance: Using Interrupt Load Balance to Improve I/O Performance for SMP Virtual Machines
| When is `irqbalance` needed in a Linux VM under VMware? |
Ctrl+C (control character intr): It will send the SIGINT signal to a process; usually the application gets aborted, but the application can handle this signal. For example, you can handle a signal with the signal() function in C.
Ctrl+Z (control character susp): It will send the SIGTSTP signal to a process to put it in the background, and like SIGINT it can be handled.
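A minimal demo of catching both (the printf in a handler is not strictly async-signal-safe, but it keeps the example short):
#include <stdio.h>
#include <signal.h>
#include <unistd.h>

static void handler(int sig)
{
    printf("caught signal %d, ignoring it\n", sig);
}

int main(void)
{
    signal(SIGINT, handler);        /* Ctrl+C */
    signal(SIGTSTP, handler);       /* Ctrl+Z */
    for (;;)
        pause();                    /* wait for signals forever */
}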
The process will not be killed immediately with Ctrl+C if it is waiting on I/O; you have to wait for its I/O to finish, and then the application will terminate and be removed from memory.
But Ctrl+Z will pause your process and its I/O. Technically the operating system will not give it CPU time, and if you then kill the background process, it may lose some I/O and data.
For force-killing a process you have to use SIGKILL, or signal number 9, which is the most powerful signal - the operating system will kill it immediately, but you may lose data, as the program will have no way to react to this signal.
|
To kill a hanging job, I use Ctrl+c, to send an interrupt signal to the hanging job. Sometimes this wouldn't stop the job, at least not immediately. I can then use Ctrl+z to suspend the job and then kill it with kill %1 (or whatever the number of the job is).
Why is Ctrl+z more powerful than Ctrl+c in interfering with a job? Sometimes, not even Ctrl+z works. Are there other key combinations one could use in such situations?
I tried stty -a, but none of the other listed key combinations do anything for me.
| Ctrl+c and Ctrl+z to interrupt/suspend jobs |
I can try to trap the Interrupt at a lower level and inform the gtkmm application.
No, that is a kernel space activity. Fortunately, the kernel does report the outcome of certain events via interfaces accessible from userland.
It's a little ambiguous in your question whether you want to detect when a block device is attached, or when a filesystem is mounted (although it seems to be more the former). If your system uses automounting (they usually do by default), it will mount filesystems from block devices when they are attached, otherwise you have to do it manually (e.g., with mount).
Either way, you want to poll/parse/scan a kernel file node based interface. I've done this before in an application (a C++ GTK one, in fact) that tracks both attached block devices and mounted filesystems via /dev/ and /etc/mtab. This is a straightforward, language agnostic method. Some people find it a little distasteful at first because it involves reading files/directories, but these interfaces do not actually exist on disk, so there is no heavy I/O overhead, and remember: read() is a system call. Reading the file nodes in kernel interfaces amounts to the same thing as a listAttachedDevices() style API, except again, it is language agnostic. When you go to read from these nodes, the kernel passes you the information they represent directly.
The /dev directory lists attached devices as special device node files -- e.g. /dev/sda. These are added and removed by the kernel as devices are plugged in and out, so if you track it by polling at intervals (say every 5 seconds), you can detect what's new and what's gone. The only complication here is that since there's no callback style API, you have to create your own thread for this if you do want a continuous check (perhaps why gparted requires you to click Refresh Devices instead).
A probably better alternative to /dev would be the stuff in /sys/block. Note that there is a significant difference between /dev and /proc (see below) or /sys in so far as the nodes in the latter contain information about things such as devices, whereas the nodes in /dev are an actual connection to the device (so if you scan /dev, don't bother reading the individual files, just note they exist).
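As a rough sketch of that polling idea in C (only listing what is there on each pass; a real tracker would diff against the previous listing to spot additions and removals):
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Print the block devices the kernel currently knows about, once per interval. */
int main(void)
{
    for (;;) {
        DIR *d = opendir("/sys/block");
        if (d == NULL) {
            perror("opendir /sys/block");
            return 1;
        }
        printf("attached block devices:");
        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            if (strcmp(e->d_name, ".") == 0 || strcmp(e->d_name, "..") == 0)
                continue;
            printf(" %s", e->d_name);
        }
        printf("\n");
        closedir(d);
        sleep(5);   /* poll interval */
    }
}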
/etc/mtab nowadays is a symlink (see also the -s switch in man ln) to /proc/self/mounts; /proc is a major swiss army knife kernel interface (see man proc). This lists mounted filesystems; if you use automounting, things will appear and disappear from there as devices are plugged in and out. The information in /proc and /sys is usually in the form of ASCII text, so you can look at these files with cat, etc., and parse it with string(stream) functions.
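Reading the mount table is just as straightforward: each line of /proc/self/mounts has the fstab-like layout "device mountpoint fstype options dump pass", so a minimal sketch only needs the first few whitespace-separated fields:
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/self/mounts", "r");
    if (f == NULL) {
        perror("fopen /proc/self/mounts");
        return 1;
    }
    char line[4096];
    while (fgets(line, sizeof line, f) != NULL) {
        char dev[512], mnt[512], type[128];
        /* first three fields: device, mount point, filesystem type */
        if (sscanf(line, "%511s %511s %127s", dev, mnt, type) == 3)
            printf("%-25s on %-30s type %s\n", dev, mnt, type);
    }
    fclose(f);
    return 0;
}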
WRT to other kinds of devices, such as a fingerprint scanner, /sys is a good place to start -- /sys/dev contains a block and a char directory. Block devices are usually storage; the information on them can be randomly accessed. Char devices exchange information with the system in a stream, which would include things like scanners, cameras, HID stuff (human interface device, e.g. mice and keyboards). I notice that gtkmm does have some high level stuff for attached HID things, presumably since these are significant in interacting with the GUI.
|
How to programmatically detect when a device raises an interrupt? This can be when a device is connected or disconnected.
And also this case: for example: when a finger is held over a fingerprint scanner, an interrupt is raised. How to detect and possibly trap this interrupt?
I want to write an application using Gtkmm such that when an event occurs like a CD being inserted or a pendrive being plugged in, I catch the interrupt these devices raises and use it to do something in my application, involving these devices.
If it cannot be done in Gtkmm, can I trap the Interrupt at a lower level and inform the Gtkmm application?
I was checking out how GParted behaves. It was initially showing /dev/sda and when I connected my pendrive, it automatically opened files application. When I checked GParted, pendrive was not present in the drop down menu of devices. It appeared only when I selected “Refresh Devices” in the GParted menu or Ctrl+R.
| How to programmatically detect when a device raises an interrupt? |
Interrupts are handled by the operating system, threads (or processes, for that matter) aren't even aware of them.
In the scenario you paint:Your thread issues a read() system call; the kernel gets the request, realizes that the thread won't do anything until data arrives (blocking call), so the thread is blocked.
Kernel allocates space for buffers (if needed), and initiates the "find the block to be read, request for that block to be read into the buffer" dance.
The scheduler selects another thread to use the just freed CPU
All goes their merry way, until...
... an interrupt arrives from the disk. The kernel takes over, sees that this marks the completion of the read issued before, and marks the thread ready. Control returns to userspace.
All goes their merry way, until...
... somebody yields the CPU for one of a thousand reasons, and it just so happens the just-freed CPU gets assigned to the thread which was waiting for data.
Something like that, anyway. No, the CPU isn't assigned to the waiting thread when an interrupt happens to signal completion of the transfer. It might interrupt another thread, and execution probably resumes that thread (or perhaps another one might be selected).
|
I've been reading a bit about threads and interrupts. There is a section which says that parallel programming using threads is simpler because we don't have to worry about interrupts.
However, what is the mechanism that signals the release of the blocking system call, if not an interrupt?
Example
I read a file in my thread, which uses a blocking system call to read the file from the disk.
During that time, other threads are running.
At some point the file is ready to be read from the hard disk.
Does it notify the processor of this via a hardware interrupt, so that it can do a context switch to the thread which asked for the file?
| Are threads which are executing blocking system calls awoken by interrupts? |
The timer ISR doesn't call schedule() directly. It ends up calling update_process_times() so the scheduler process accounting information is up to date.
The scheduler is eventually called when returning to userspace. If the kernel is preemptive, it is also called when returning from the timer interrupt to kernelspace.
As an example, imagine a process A which issues a syscall which is interrupted by a device-generated interrupt, that is then interrupted by a timer interrupt: process A userspace → process A kernelspace → device ISR → timer ISR
syscall device IRQ timer IRQWhen the timer ISR ends, it returns to another ISR, that then returns to kernelspace, which then returns to userspace. A preemptive kernel checks if it needs to reschedule processes at every return. A non-preemptive kernel only does that check when returning to userspace.
In ARM land, the codepath goes broadly like:An IRQ received while in userspace ends up calling __irq_usr, while an IRQ received while in SVC mode ends up calling __irq_svc. IRQs should not be received while in other processor modes.
In __irq_svc, after handling the IRQ, if the kernel is preemptive, preemption is not disabled, and a reschedule is needed, the kernel jumps to svc_preempt, which calls preempt_schedule_irq, which calls schedule. Otherwise, no reschedule is done.
Eventually, the CPU will return to userspace, either from an IRQ handler (__irq_usr → ret_to_user_from_irq), or from a syscall (vector_swi → ret_fast_syscall). There, the kernel checks whether there is work to be done, and if a reschedule is needed, schedule is called. |
When a timer interrupt occurs, the ISR is called to service the interrupt.
Is it okay to assume that every timer interrupt ends with a call to the scheduler on which process should continue running next?
Can that be generalized and say that every interrupt must end in a scheduler call?
| Are time interrupts always followed by a scheduler call? |
As of GNU bash, version 4.4.19(1)-release (x86_64-pc-linux-gnu) and I am not using a VM:
echo $(yes) exits the shell and does not freeze the system, and:
ls /*/../*/../*/../*/../*/returns
bash: /bin/ls: Argument list too longBut as a rule, when you are dealing with something that could grab all the resources of a system, it is better to set limits before running it; if you know that a process could be a CPU hog, you can start it with cpulimit or renice it.
If you want to limit the processes that are already started, you will have to do it one by one by PID, but you can have a batch script to do that like the one below:
#!/bin/bash
LIMIT_PIDS=$(pgrep tesseract) # PIDs in queue replace tesseract with your name
echo $LIMIT_PIDS
for i in $LIMIT_PIDS
do
cpulimit -p $i -l 10 -z & # to 10 percent processes
doneIn my case pypdfocr launches the greedy tesseract.
Also, in some cases where your CPU is pretty good, you can just use renice like this:
watch -n5 'pidof tesseract | xargs -L1 sudo renice +19' |
Warning: DO NOT attempt the commands listed in this question without knowing their implications.
Sorry if this is a duplicate. I am surprised to learn that a command as simple as
echo $(yes)freezes my computer (actually it lags the computer very badly rather than freezing it, but the lag is bad enough to make one think it has frozen). Typing Ctrl+C or Ctrl+Z right after typing this command does not seem to help me recover from this mistyped command.
On the other hand
ls /*/../*/../*/../*/../*/is a well-known vulnerability that at best lags the computer badly and at worst crashes it.
Note that these commands are quite different from the well-known fork bombs.
My question is: Is there a way to interrupt such commands which build up huge amount of shell command line options immediately after I start to execute them in the shell?
My understanding is that since shell expansion is done before the command is executed, the usual way to interrupt a command does not work because the command is not even running when the lag happens, but I also want to confirm that my understanding is correct, and I am extremely interested to learn any way to cancel the shell expansion before it uses too much memory.
I am not looking for how the kernel works at low memory. I am also not looking for SysRq overkills that may be helpful when the system already lags terribly. Nor am I looking for preventative approaches like imposing a ulimit on memory. I am looking for a way that can effectively cancel a huge shell expansion process from within the shell itself before it lags the system. I don't know whether it is possible. If it is impossible as commented, please also leave an answer indicating that, preferably with explanations.
I have chosen not to include any system-specific information in the original question because I want a general answer, but in case this matters, here are the information about my system: Ubuntu 16.04.4 LTS with gnome-terminal and bash 4.3.48(1), running a x86_64 system. No virtual machines involved.
| Interrupt shell command line expansion |
"The kernel is not a process."
This is pure terminology. (Terminology is important.) The kernel is not a process because by definition processes exist in userland. But the kernel does have threads.
"If a program hits some exception handler that requires long-running synchronous processing before it can start running again (e.g. hits a page fault that requires a disk read)".
If a userland process executes a machine instruction which references an unmapped memory page then:The processor generates a trap and transitions to ring 0/supervisor mode. (This happens in hardware.)
The trap handler is part of the kernel. Assuming that indeed the memory page must be paged in from disk, it will put the process in the state of uninterruptible sleep (this means it saves the process CPU state in the process table and it modifies status field in the process entry in the table of processes), finds a victim memory page, initiates the I/O to page out the victim and page in the requested page, and invokes the scheduler (another part of the kernel) to switch userland context to another process which is ready to run.
Eventually, the I/O completes. This generates an interrupt. In response to the interrupt, the processor invokes a handler and transitions to ring 0/supervisor mode. (This happens in hardware.)
The interrupt handler is part of the kernel. It clears the waiting for I/O state of the process which was waiting for the memory page and marks it ready to run. It then invokes the scheduler to switch userland context to a process which is ready to run.In general, the kernel runs:In response to a hardware trap or interrupt; this includes timer interrupts.
In response to an explicit system call from a user process.Most of the time, the processor is at ring 3/user mode and executes instructions from some userland process. It transitions to ring 0/supervisor mode (where the kernel lives) when an userland process makes a syscall (for example, because it wants to do some input/output operation) or when the hardware generates a trap (invalid memory access, division by zero, and so on) or when an interrupt request is received from the hardware (I/O completion, timer interrupt, mouse move, packet arrived on the network interface, etc.)
To answer the question in the title, "how does the kernel scheduler know how to pre-empt a process": the kernel handles timer interrupts. If, when a timer interrupt arrives, the scheduler notices that the currently running userland process has exhausted its quantum then the process is put at the end of the running queue and another process is resumed. (In general, the scheduler takes care to ensure that all userland processes which are ready to run receive processor time fairly.)
|
As far as I understand, the kernel is not a process, but rather a set of handlers that can be invoked from the runtime of another process (or by the kernel itself via a timer or something similar?)
If a program hits some exception handler that requires long-running synchronous processing before it can start running again (e.g. hits a page fault that requires a disk read), how does the kernel identify that the context should be switched? In order to achieve this, it would seem another process would need to run?
Does the kernel spawn a process that takes care of this by intermittently checking for processes in this state? Does the process that invokes the long-running synchronous handler let the kernel know that it should switch contexts until the handler is complete (e.g. the disk read completes)?
| How does the kernel scheduler know how to pre-empt a process? |
There's a fantastic pair of articles on LWN that describe how syscalls work on Linux: "Anatomy of a system call", part 1 and part 2.
|
Are system calls like fork() and exit() saved in some kind of function pointer table, just like the Interrupt Descriptor Table? Where does my OS go when I call fork() or exit()?
I guess this image explains it, but I would like an explanation from a person who really knows what's happening; I don't want knowledge based on my own assumptions. | Is there any Syscall table just like Interrupt Table?
Normally, the NIC will only interrupt the CPU if it needs to send the received packet to the system. In non-promiscuous mode, this would only be for packets addressed to its MAC address, the broadcast address ff:ff:ff:ff:ff:ff, or a multicast address to which it has been subscribed. It also does validation before sending the packet to the CPU: the normal Ethernet CRC check, and IP/TCP/UDP checksums if the NIC has that capability and the driver has enabled this offloading.
Some NICs have a limited number of multicast subscription addresses; if this is exceeded, it will send all multicast packets to the CPU, and the OS has to discard the ones it doesn't care about.
|
While I know that a lot of packet processing (CRC calculations, packet segmentation handling, etc.) can be offloaded to the NIC, does each packet still cause an interrupt to the CPU? Is there a difference if the NIC is in promiscuous mode?
| Does each network packet cause an interrupt to CPU? |
ping -D localhost 2>&1 | (trap '' INT; exec sed -u 's/^\[\([0-9]*\.[0-9]*\)\]\(.*$\)/echo "[`date -d @\1 +"%Y-%m-%d %H:%M:%S"`] \2"/e') | tee -a -i ping.logCalling trap '' INT tells the shell to ignore SIGINT. The exec is optional but nice to have, since the subshell process is no longer necessary after the trap.
|
In the following chain of piped commands, when an interrupt is sent with Ctrl-C, ping is able to print its summary statistics before exiting, as long as tee has the -i (ignore interrupts) flag:
ping -D localhost 2>&1 | tee -a -i ping.logHowever, with another command in the chain, ping's summary does not get printed:
ping -D localhost 2>&1 | sed -u 's/^\[\([0-9]*\.[0-9]*\)\]\(.*$\)/echo "[`date -d @\1 +"%Y-%m-%d %H:%M:%S"`] \2"/e' | tee -a -i ping.logHow can the above be made to print the summary?
Does sed have an option to ignore interrupts? In general how can interrupts be handled gracefully with piped commands?
| How to ignore interrupts with piped commands |
count gives the total number of times the IRQ fired, modulo 100,000; spurious gives the number of unhandled events in recent memory; and last_unhandled stores the jiffies at which the last unhandled event occurred (displayed in milliseconds since the kernel booted).
The purpose of these is to track spurious interrupts and allow them to be taken into account if they occur too frequently. When a spurious interrupt occurs, the current time (in jiffies) is compared with the last unhandled time, and the spurious counter is only incremented if the previous spurious interrupt was recent enough. So occasional spurious interrupts won’t affect the system, whereas frequent spurious interrupts will eventually result in the IRQ being disabled (along with a message in the kernel logs):If 99,900 of the previous 100,000 interrupts have not been handled
then assume that the IRQ is stuck in some manner. Drop a diagnostic
and try to turn the IRQ off. |
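Very roughly, the accounting described above amounts to something like the following simplified sketch (an illustration of the idea only, not the actual code in kernel/irq/spurious.c, which does quite a bit more):
/* Simplified illustration of the spurious-IRQ accounting idea. */
struct irq_stats {
    unsigned int count;            /* total interrupts, wraps at 100000     */
    unsigned int unhandled;        /* unhandled interrupts in recent memory */
    unsigned long last_unhandled;  /* jiffies of the last unhandled event   */
};

void note_interrupt(struct irq_stats *s, int handled,
                    unsigned long now, unsigned long recent_window)
{
    if (!handled) {
        /* only treat it as suspicious if the previous one was recent enough */
        if (now - s->last_unhandled <= recent_window)
            s->unhandled++;
        s->last_unhandled = now;
    }

    if (++s->count >= 100000) {
        if (s->unhandled > 99900) {
            /* the kernel would log the problem and disable the IRQ line here */
        }
        s->count = 0;
        s->unhandled = 0;
    }
}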
Here is the output from cat /proc/irq/79/spurious:
count 28
unhandled 0
last_unhandled 0 msWhat are these parameters indicating here — count, last_unhandled? Is this count indicating the number of times this interrupt did not get noticed?
| What does `/proc/irq/.../spurious` contain? |
The non-numeric entries in /proc/interrupts correspond to arch-specific, non-device-related interrupts.
On x86, the IDT layout is described in arch/x86/include/asm/irq_vectors.h:Vectors 0 ... 31 : system traps and exceptions - hardcoded events
Vectors 32 ... 127 : device interrupts
Vector 128 : legacy int80 syscall interface
Vectors 129 ... LOCAL_TIMER_VECTOR-1
Vectors LOCAL_TIMER_VECTOR ... 255 : special interruptsThe arch-specific interrupts are handled by IDT entries from 0 to 31 and from 129 to 255, with the local timer interrupt the first in the latter range. So when you see 0 in /proc/interrupts, it’s IDT entry 32; when you see NMI, it’s entry 2; etc. The IDT itself is set up in arch/x86/kernel/idt.c.
|
cat /proc/interrupts shows a bunch of IRQs such as NMI and LOC. The per-line comments in the output give clear explanation, but if they do not have a numeric IRQ number, how does the x86 CPU respond to them, in terms of entries in the Interrupt Descriptor Table?
| What are the non-numeric IRQs in /proc/interrupts? |
It seems like some hardware timer would be necessary?Yes, the kernel relies on hardware to generate an interrupt at regular intervals. On PCs, this was historically the 8253/8254 programmable interval timer, or an emulation thereof, then the local APIC timer, then the HPET.
Current Linux kernels can be built to run “tickless” when possible: the kernel will program timers to only fire when necessary, and if a given CPU is running a single process, that may well be “never”. In most cases, dynamic ticks are used, so the kernel sets timers up to fire at varying intervals depending on its requirements — fewer interrupts means fewer wake-ups, which means idle CPUs can be kept in low-power modes for longer periods, which saves energy.
|
In my previous question How does the kernel scheduler know how to pre-empt a process? I was given an answer to how pre-emption occurs.
Now I am wondering, how does the kernel scheduler know that a timeslice has passed? I read up on the hardware timer solution which makes sense to me, but then I read that most current operating systems (e.g. Windows, Linux, etc.) do not use hardware timers, but rather software timers.
How can software timers be used to pre-empt a process once it has taken up its timeslice (e.g. it did not pre-empt itself.) It seems like some hardware timer would be necessary?
| How does the kernel scheduler know a timeslice has passed? |
For the userspace data collection program, what is wrong with an infinite loop? As long as you are using the poll system call, it should be efficient: https://stackoverflow.com/questions/30035776/how-to-add-poll-function-to-the-kernel-module-code/44645336#44645336 ?
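A minimal sketch of such a userspace reader, assuming the driver implements poll for /dev/mydevice (file names as in your setup), would be along these lines; the loop sleeps inside poll() and only wakes up when the ISR has queued new samples:
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int dev = open("/dev/mydevice", O_RDONLY);
    int out = open("out_data.dat", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (dev < 0 || out < 0) {
        perror("open");
        return 1;
    }

    struct pollfd pfd = { .fd = dev, .events = POLLIN };
    unsigned char buf[4096];

    for (;;) {
        /* block here until the driver signals that data is available */
        if (poll(&pfd, 1, -1) < 0) {
            perror("poll");
            break;
        }
        if (pfd.revents & POLLIN) {
            ssize_t n = read(dev, buf, sizeof buf);
            if (n <= 0)
                break;
            if (write(out, buf, (size_t)n) != n)
                break;
        }
    }
    close(dev);
    close(out);
    return 0;
}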
Permanent data storage
I'm not sure what the best way to do it is; why don't you just write to a file from userland on the poll? I suppose your concern is that if too much data arrives, data would be lost; is that so?
But I doubt the limiting factor would be kernel to userland communication in that case, but rather the slowness of the permanent storage device, so doing it on userland won't make any difference I think. In any case, the kernel only solution has a high profile question at: https://stackoverflow.com/questions/1184274/how-to-read-write-files-within-a-linux-kernel-module and I don't think you will get a better solution here.
Disable interrupts
Are you sure that it would make any difference, especially considering where the bottleneck is likely going to be? I would expect that if your device is actually producing a large number of interrupts, then those would dominate any other interrupts in any case. Is it worth risking messing up the state of other hardware? Do the specs of your hardware device suggest that it could physically provide a much larger data bandwidth than what you currently have?
I don't know how to do it myself, but if you want an answer, your best bet is to make a separate question with title "How to disable all interrupts from a Linux kernel module?". LDD2 mentions the cli() function http://www.xml.com/ldd/chapter/book/ch09.html but it seems that it was deprecated: https://notes.shichao.io/lkd/ch7/#no-more-global-cli That text then suggests local_irq_disable and local_irq_save.
I would also try to hack it up with whatever method you find to disable the interrupts, and see if it gets any more efficient before looking further if a nice method exists.
On an emulator, a quick:
static int myinit(void)
{
pr_info("hello init\n");
unsigned long flags;
local_irq_save(flags);
return 0;
}fails with:
returned with disabled interruptsapparently coming from v4.16 do_one_initcall, so there is a specialized error handling for that!
I then tried naively doing it from a worker thread:
static int work_func(void *data)
{
unsigned long flags;
local_irq_save(flags);
return 0;
}static int myinit(void)
{
kthread = kthread_create(work_func, NULL, "mykthread");
wake_up_process(kthread);
return 0;
}but still then I can't observe any effect, so the interrupts must be being enabled by something else, as can be inferred from:
watch -n 1 grep i8042 /proc/interruptswhich keeps updating tty or muse / keyboard interrupts.
Same from other entry points such as fops, or if I try a raw asm("cli"). We will need some more educated approach.
|
I have been playing around with kernel programming for a while and want to create this simple data acquisition interface with some custom hardware. For portability and reusability, I do the whole thing on my Raspberry Pi.
The challenging part of the project is having a high speed ADC (parallel) connected to GPIO's and having a kernel module that uses hardware interrupt from ADC to acquire each sample and store it inside a buffer which is then accessible via chardevice.
My current setup (that works) is as follows:I have a userspace C program that is controlling my hardware through SPI. If I send a required command, it starts acquiring analogue data and sends them to the ADC.
Whenever the ADC finishes a conversion, it pulls the corresponding signal low on a GPIO and I get an interrupt inside the kernel module (bound to that GPIO). The ISR collects the value of 12 other GPIOs (it's a 12-bit ADC) and puts it into a buffer that is then accessed through /dev/mydevice.
I have another separate userspace program that runs a never-ending while loop, reading from /dev/mydevice and in turn writes into 'out_data.dat' (an userspace file).
With this crude setup (2 userspace programs and kernel module loaded) I can write over 130 000 samples into my file per second (without missing anything).I now want to see how much faster I can make it, there are 2 things to consider:Is the setup I have outlined above the 'usual' way something like this would be done? I read everywhere that direct file I/O is not advised from the kernel, so I am not doing it. Surely though, it should be possible to write it into some "permanent" location during the ISR. This seems to me like a common problem, trying to get data from some hardware into a computer using interrupts.
Without changing my setup above, is there any way how to disable other interrupts to make it as smooth as possible? During the data acquisition I do not really need anything, only some sort of a way how to stop it. Any other interrupts (wireless, monitor refresh etc...) can be disabled as data acquisition is only to be run for a few minutes. Afterwards, everything will resume and more demanding python code can be run to analyze and visualize the data (at least that's my simple view of it). | Saving data from kernel module into userspace |
Yes, I had the same issue on the same machine. I applied the option below and now dmesg is not showing that error anymore.
From here: https://bbs.archlinux.org/viewtopic.php?id=124908Try to disable hpet with hpet=disable via the kernel command line. It will fall back to a less nice (but not seemingly broken) clocksource if available.Search the web for how to add a kernel boot option.
For grub, here is the article I used: https://www.howtoforge.com/tutorial/kernel-boot-parameter-edit/
Try it first at boot time, by adding a temporary boot option (explained in the howtoforge article).
|
I have this error flooding my syslog every day. A suggested solution on the Arch Linux forum is hpet=disable on the kernel command line, but I think it is a bad idea and I am searching for another solution. Is there any other?
| kernel: hpet1: lost 19 rtc interrupts |
Under your apparently x86_PC architecture :IRQ 0 is the interrupt line associated to the first timer (Timer0) of the Programmable Interval Timer. It is delivered by the IO-APIC to the boot cpu (cpu0) only.
This interrupt is also known as the scheduling-clock interrupt or
scheduling-clock tick or simply tick:
If the NO_HZ kernel configuration knob is not set (or under Linux kernel versions < 3.10), this interrupt will be programmed to fire periodically at the HZ frequency.
If NO_HZ is set then the PIT will work in its one-shot mode
Used at early boot times, it can still serve as the scheduling clock tick and for updating system time unless some better (*1) clocksource is found available.
It will anyway serve for cpu time accounting if TICK_CPU_ACCOUNTING is set as part of the kernel configuration.LOC are the interrupts associated with the local APIC timer.
which should be enabled to fire after some tedious initialization. (see the link above)
Then, depending on the CPU hardware's ability to keep this clocksource stable in idle times, and depending on the kernel's configuration and boot command line parameters, it will replace the PIT interrupt for triggering miscellaneous scheduler operations, precise CPU time accounting and system time keeping.
|
When I do cat /proc/interrupts on my multicore x86_64 desktop PC (kernel 3.16) I see this:
0: 16 0 IO-APIC-edge timer
LOC: 529283 401319 Local timer interruptsWhen I do cat /proc/interrupts on my multicore x86_64 laptop (kernel 3.19) I see this:
0: 1009220 0 IO-APIC-edge timer
LOC: 206713 646587 Local timer interruptsWhen I saw this difference, I asked myself what the difference between those two is?
I hope someone can explain this rather thoroughly; the explanation given here is not very detailed and does not explain why my desktop PC does not use the timer, but my laptop does.
| What is the difference between Local timer interrupts and the timer? |
You're barking up the wrong tree. Having the interrupts go to both CPUs would make performance worse, not better. For one thing, it would mean the software decoder would constantly be interrupted. For another, it would mean the interrupt code would be less likely to be hot in cache. There are many other reasons this would make things worse.
|
As you can see below, nvidia is sharing the interrupt and the interrupt is using only CPU0. How can I change the interrupt for nvidia, and how can I make it use both CPUs?
Here is an article describing the second question; I can change between CPU0 and CPU1 by modifying smp_affinity, but did not understand how I can set it to use both CPUs.
According to this blog, setting smp_affinity to 3 should use both CPU0 and CPU1. Actually, in my case, it uses CPU0 (behaving like it was set to 1). Setting it to 2 uses CPU1.
radu@radu-work:~$ cat /proc/interrupts
CPU0 CPU1
0: 79 0 IO-APIC-edge timer
1: 9 17152 IO-APIC-edge i8042
4: 2 0 IO-APIC-edge
6: 5 0 IO-APIC-edge floppy
7: 0 0 IO-APIC-edge parport0
8: 1 0 IO-APIC-edge rtc0
9: 0 0 IO-APIC-fasteoi acpi
12: 694613 0 IO-APIC-edge i8042
16: 1233922 0 IO-APIC-fasteoi uhci_hcd:usb3, ahci, nvidia
17: 3961 168757 IO-APIC-fasteoi uhci_hcd:usb4, pata_jmicron
18: 0 0 IO-APIC-fasteoi ehci_hcd:usb1, uhci_hcd:usb7
19: 59 0 IO-APIC-fasteoi ata_piix, ata_piix, uhci_hcd:usb6
22: 819 6915 IO-APIC-fasteoi HDA Intel
23: 2 0 IO-APIC-fasteoi ehci_hcd:usb2, uhci_hcd:usb5, ethradu@radu-work:~$ sudo cat /proc/irq/16/smp_affinity
1root@radu-work:~# uname -a
Linux radu-work 2.6.32-32-generic #62-Ubuntu SMP Wed Apr 20 21:54:21 UTC 2011 i686 GNU/LinuxThank you.
EDIT:
I am trying to get my Linux box to play HD movies (at least 720p). I have an nvidia 66xx series card, Ubuntu 11.04, and the nvidia proprietary drivers installed, but they do not support hardware acceleration (and video decoding) for old hardware (just the 8xxx series and above), so the decoding is done in software. When I try to watch an HD movie the image freezes for a few seconds, works for a couple of seconds, then freezes again. The CPU usage caught my attention: the nvidia drivers were using just one CPU, so I thought that if I can make nvidia use both CPUs maybe I will get better performance, and be able to finally watch HD movies on my Linux box. By the way, I have tried every possible Linux player: mplayer (even nightly builds), totem, vlc and many more ...
EDIT:
irqbalance --debug
root@radu-work:/# irqbalance --debug
Package 0: cpu mask is 00000001 (workload 0)
Cache domain 0: cpu mask is 00000001 (workload 0)
CPU number 0 (workload 0)
CPU number 0 (workload 0)
Package 0: cpu mask is 00000003 (workload 0)
Cache domain 0: cpu mask is 00000003 (workload 0)
CPU number 0 (workload 0)
CPU number 1 (workload 0)
Interrupt 44 (class ethernet) has workload 7
Interrupt 0 (class timer) has workload 0
Interrupt 16 (class storage) has workload 122
Interrupt 17 (class storage) has workload 29
Interrupt 19 (class storage) has workload 0
Interrupt 45 (class legacy) has workload 2
Interrupt 1 (class legacy) has workload 2
Interrupt 12 (class legacy) has workload 0
-----------------------------------------------------------------------------
IRQ delta is 152640
Rescanning cpu topology
Package 0: cpu mask is 00000001 (workload 0)
Cache domain 0: cpu mask is 00000001 (workload 0)
CPU number 0 (workload 0)
CPU number 0 (workload 0)
Package 0: cpu mask is 00000003 (workload 0)
Cache domain 0: cpu mask is 00000003 (workload 0)
CPU number 0 (workload 0)
CPU number 1 (workload 0)
Package 0: cpu mask is 00000001 (workload 16)
Cache domain 0: cpu mask is 00000001 (workload 16)
CPU number 0 (workload 3)
Interrupt 44 (ethernet/2)
CPU number 0 (workload 0)
Interrupt 17 (storage/9)
Interrupt 19 (storage/0)
Interrupt 45 (legacy/0)
Interrupt 12 (legacy/0)
Package 0: cpu mask is 00000003 (workload 42)
Cache domain 0: cpu mask is 00000003 (workload 42)
CPU number 0 (workload 0)
CPU number 1 (workload 0)
Interrupt 16 (storage/40)
Interrupt 1 (legacy/0) -----------------------------------------------------------------------------
...
-----------------------------------------------------------------------------
IRQ delta is 10
IRQ delta is 10, switching to power mode
Rescanning cpu topology
Package 0: cpu mask is 00000001 (workload 0)
Cache domain 0: cpu mask is 00000001 (workload 0)
CPU number 0 (workload 0)
CPU number 0 (workload 0)
Package 0: cpu mask is 00000003 (workload 0)
Cache domain 0: cpu mask is 00000003 (workload 0)
CPU number 0 (workload 0)
CPU number 1 (workload 0)
Package 0: cpu mask is 00000001 (workload 38)
Cache domain 0: cpu mask is 00000001 (workload 38)
CPU number 0 (workload 36)
Interrupt 44 (ethernet/35)
CPU number 0 (workload 0)
Interrupt 16 (storage/0)
Interrupt 1 (legacy/0)
Package 0: cpu mask is 00000003 (workload 4)
Cache domain 0: cpu mask is 00000003 (workload 4)
CPU number 0 (workload 0)
CPU number 1 (workload 0)
Interrupt 19 (storage/0)
Interrupt 17 (storage/0)
Interrupt 45 (legacy/0)
Interrupt 12 (legacy/0) | change interrupt smp_affinity |
System calls can be interrupted through the use of signals, such as SIGINT (generated by CTRL+C), SIGHUP, etc. However, you can only interrupt them by interacting with the process that issued them, through its PID, using Unix signals and the kill command.
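As a small illustrative sketch (not from the linked answer): a blocking read() in C returns -1 with errno set to EINTR when a signal arrives whose handler was installed without SA_RESTART:
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void handler(int signo) { (void)signo; /* nothing to do; just interrupt the read */ }

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = handler;   /* sa_flags deliberately left without SA_RESTART */
    sigaction(SIGINT, &sa, NULL);

    char c;
    puts("blocking in read() on stdin; press Ctrl+C");
    if (read(0, &c, 1) == -1 && errno == EINTR)
        puts("read() was interrupted by a signal (EINTR)");
    return 0;
}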
rt_patch & system calls
@Alan asked the following follow-up question:Is the possibility to interrupt system calls directly related with the
acceptance of the rt_patch in the mainline Linux kernel?My response:
I would think so. In researching this I couldn't find a smoking gun that says you could/couldn't do this which leads me to believe that you can.
The other data point which makes me think this, is that the underlying signals mechanism built into Unix is necessary for being able to interact with processes. I don't see how a system with these patches in place would be able to function without the ability to use signals.
Incidentally the signals operate at the process level. There isn't any method/API which I'm aware of for injecting interrupts to system calls directly.
ReferencesWhen and how are system calls interrupted? |
Please comment on the following sentence:On the standard Linux kernel without the rt patch, interrupts can't
interrupt ongoing system calls. The reason why our machine doesn't
stop working when data is fetched from the hard disk is because the
system call we used for that operation is blocking. Blocking means
that once it issues the request to the hard disc it changes the
process state to blocked, and willingly gives up the processor time.
There are no means to interrupt an ongoing system call on a non real time kernel.This is my understanding of the topic; however, I am not sure if it is correct.
| Can system calls be interrupted? |
The following code allows us to use the usual shortcuts Ctrl + X, Ctrl + C, Ctrl + V to cut/copy/paste, to both Zsh's as well as X.org's clipboard.
Ctrl + C functions as interrupt as usual, when we aren't in ZLE. To do this, we insert our hook into Zsh's precmd_functions & preexec_functions, to know when we start editing with ZLE, and when we finish editing it when we press Enter.
In order to set/unset Ctrl + C as the interrupt signal, we use stty.
The front part of the script below defines our copy/cut/paste clipboard functions.
The latter part is a slightly modified code from the excellent answer shell - Zsh zle shift selection - Stack Overflow, which binds keys to these functions.
function zle-clipboard-cut {
if ((REGION_ACTIVE)); then
zle copy-region-as-kill
print -rn -- $CUTBUFFER | xclip -selection clipboard -in
zle kill-region
fi
}
zle -N zle-clipboard-cutfunction zle-clipboard-copy {
if ((REGION_ACTIVE)); then
zle copy-region-as-kill
print -rn -- $CUTBUFFER | xclip -selection clipboard -in
else
# Nothing is selected, so default to the interrupt command
zle send-break
fi
}
zle -N zle-clipboard-copyfunction zle-clipboard-paste {
if ((REGION_ACTIVE)); then
zle kill-region
fi
LBUFFER+="$(xclip -selection clipboard -out)"
}
zle -N zle-clipboard-pastefunction zle-pre-cmd {
# We are now in buffer editing mode. Clear the interrupt combo `Ctrl + C` by setting it to the null character, so it
# can be used as the copy-to-clipboard key instead
stty intr "^@"
}
precmd_functions=("zle-pre-cmd" ${precmd_functions[@]})function zle-pre-exec {
# We are now out of buffer editing mode. Restore the interrupt combo `Ctrl + C`.
stty intr "^C"
}
preexec_functions=("zle-pre-exec" ${preexec_functions[@]})# The `key` column is only used to build a named reference for `zle`
for key kcap seq widget arg (
cx _ $'^X' zle-clipboard-cut _ # `Ctrl + X`
cc _ $'^C' zle-clipboard-copy _ # `Ctrl + C`
cv _ $'^V' zle-clipboard-paste _ # `Ctrl + V`
) {
if [ "${arg}" = "_" ]; then
eval "key-$key() {
zle $widget
}"
else
eval "key-$key() {
zle-$widget $arg \$@
}"
fi
zle -N key-$key
bindkey ${terminfo[$kcap]-$seq} key-$key
} |
I've added some keyboard shortcuts in my Zsh to enable selecting words.
In order for the selection to do something, I would like to use Ctrl + C to copy it. However I would also like to use Ctrl + C to interrupt programs, when the Zsh Line Editor (ZLE) isn't active.
Is this possible and how can I do this? I tried to declare a function TRAPINT to hook Ctrl + C, however when I'm in the ZLE and hit Ctrl + C, this function doesn't appear to be called.
| Copying text with Ctrl + C when the Zsh line editor is active |
softirqs aren't directly related to hardware interrupts, they're the successor to "bottom halves" and the predecessor of tasklets. The (old) Unreliable Guide to Hacking the Linux Kernel has a brief section on the topic; I dare say there are better resources elsewhere. The list of softirqs is defined in include/linux/interrupt.h; you'll see they don't correspond to single hardware interrupts.
Thus you shouldn't subtract /proc/softirq counts from /proc/interrupts. The latter only counts hardware interrupts; these of course may result in softirqs being used too, but there's no easy way of determining the correlation (e.g. between hardware interrupts on your network adapter and NET_RX or NET_TX softirqs).
|
/proc/softirq is softirq stats. Is /proc/interrupt both hard and soft interrupts or hard only?
I want to measure the rate of hard and soft irq's per second roughly using watch -n 1 grep 'foo' /proc/softirq and watch -n 1 grep 'bar' /proc/interrupt so I can compare the rate of hardware interrupt increase to software interrupt.
I'm wondering if I need to subtract /proc/softirq counts from /proc/interrupt to get the count of hardware IRQs because it counts both kinds or if /proc/interrupt is hardware only?
| What is the difference between /proc/interrupts and /proc/softirq in Linux? |
According to kernel documentations:Software Interrupt Context: Softirqs and Tasklets
Whenever a system call is about to return to userspace, or a hardware
interrupt handler exits, any 'software interrupts' which are marked
pending (usually by hardware interrupts) are run (kernel/softirq.c).
Much of the real interrupt handling work is done here. Early in the
transition to SMP, there were only 'bottom halves' (BHs), which didn't
take advantage of multiple CPUs. Shortly after we switched from
wind-up computers made of match-sticks and snot, we abandoned this
limitation and switched to 'softirqs'.
include/linux/interrupt.h lists the different softirqs. A very
important softirq is the timer softirq (include/linux/timer.h): you
can register to have it call functions for you in a given length of
time.
Softirqs are often a pain to deal with, since the same softirq will
run simultaneously on more than one CPU. For this reason, tasklets
(include/linux/interrupt.h) are more often used: they are
dynamically-registrable (meaning you can have as many as you want),
and they also guarantee that any tasklet will only run on one CPU at
any time, although different tasklets can run simultaneously. Caution
The name 'tasklet' is misleading: they have nothing to do with
'tasks', and probably more to do with some bad vodka Alexey Kuznetsov
had at the time.
You can tell you are in a softirq (or tasklet) using the in_softirq()
macro (include/linux/interrupt.h). Caution
Beware that this will return a false positive if a bh lock (see below)
is held. |
Who raises this softirq? Is it raised on every timer tick (based on the timer interrupt)?
Does this make the kernel schedule a runnable process? If yes, how do the handlers of lower-priority softirqs (HR_TIMER, RCU_SOFTIRQ) run, since the execution is now in process context (after a schedule())?
| What is the functionality of SCHED_SOFTIRQ in linux? |
After looking at it for a while, this seems like a "hardware issue" - most likely the physical server hosting the VM is under heavy load from other VMs, and the m1 type machines do not get guaranteed performance, so you get whatever CPU time is left on the host. Worse, if your VM is running on CPU core 0, you also get all the interrupt handling on the machine. See this Xen wiki article under "HVM VM's first (and possibly only) VCPU is fully utilised.". Also see this AWS support thread for other people with a similar interrupt problem.
The only solution at this point is to get your VM migrated to another physical host. This can sometimes be done very simply by stopping the VMs, waiting a few minutes and starting again - if EC2 has assigned another VM to your old host in the mean time, then you can get a different slot and the problem may be solved.
If that doesn't work, the best bet is to create a new image from your VM, and start it on another availability zone. This will cause IPs to be replaced and may require updating security groups and firewall rules. Also make sure you stop your VM before you create the image - to make sure that after the image is created and before the new VM is started, your old VM will not generate new data.
|
I have an m1.small EC2 instance that is mostly just running Apache as a web server for several simple PHP web sites (that use RDS as a database). The server constantly has a very high load average - around 8, and never below 5. This causes my web sites to be annoyingly slow, much more than I'd expect from a supposedly 2.6GHz CPU with 1.7GB RAM.
top and friends show that other than ~50% "steal" time (AFAIU, time stolen by the hypervisor for other VMs on the same CPU), the rest is almost entirely "IRQ" time.
mpstat says that this time is spent on a very stable 191.59 interrupts/sec (i.e. that number almost doesn't change over time), and according to '/proc/interrupts' these are mostly spent on xen-percpu-virq timer0 and xen-dyn-event eth0.
What are these, and how can I get the EC2 instance to lower the load and spend some more time on my PHP sites?
| Why EC2 instance spends all its time in "IRQ" and what to do about it? |
This is highly platform-specific. Unless you bind to a certain platform (even difference between x86-32 and x86-64 is principal), one can't answer this. But, if to limit it to x86, according to your last comment, I could suggest some information.
There are two main styles of service request ("syscall") from user land to kernel land: interrupt-styled and sysenter-styled. (These terms are invented by me for this description.) Interrupt-styled requests are those that handled by processor exactly in the same manner as an external interrupt. In x86 protected mode, this is called using int 0x80 (newer) or lcall 7,0 (the oldest variant, SysV-compatible) and implemented using so-called gates (task gate, interrupt gate, etc.), configured as special segment descriptors. The task switching is executed by processor. During this switching, the old task registers, including stack pointer, are stored to old task TSS, and the new task registers, including stack pointer, are loaded from the new task TSS. In other words, all "usual" registers are stored and loaded (so this is very long action). (There is a separate issue with FPU/SSE/etc. state which change is postponed - see documentation for details.)
For handling such service requests, kernel prepares a separate stack for each thread (a.k.a. LWP - lightweight process), because a thread can be switched during any blockable function call. Such stack usually has small size (for example, 4KB).
As soon as x86 task switching always changes stack pointer, there is no chance to reuse userland stack for kernel. On the other hand, such reuse shall not be allowed at all (except small amount of the current thread data) because a user process page can be unsecure: another active thread can change or even unmap it. That's why it is simply prohibited to use userland stack for running in kernel, so, each thread shall have different stacks for its user and kernel land; this remains true for modern, sysenter-styled processing. (On the other hand, as already noted above, each thread shall have a stack for its kernel land another than of another thread.)
Sysenter-styled processing had been designed much later and implemented with SYSENTER and SYSCALL processor instructions. They differ in that they were not designed with keeping an old (too firm) restriction in mind, that system call shall keep all registers. Instead, they were designed more closer to a usual function call ABI which allows that a function can arbitrarily change some registers (in most ABIs, this is named "scratch" registers), only a few registers are changed and the care to keep old values is brought by handler routines. SYSENTER/SYSEXIT instruction pair (both for 32 and 64 bits) spoil old contents of RDX and RCX (in a weird manner - userland shall prefill them with proper values), and new RIP and RSP are loaded from respective MSRs, so, stack is switched to the kernel land one immediately. Opposed to this, SYSCALL/SYSRET (64 bit only) use RCX and R11 for return address and flags, and do not change stack by themselves. Later on, kernel utilizes part of this stack to save a few registers and then switches to own stack, because 1) there is no guarantee that userland stack is enough big to keep all needed values, and 2) for security reasons (see above). From this point, we have again a per-thread kernel stack.
Beside userland threads, there are many kernel-only threads (you can see them in ps output as names inside square brackets). Each such thread has its own stack. They implement 1) periodic routines, started on some event or timeout, 2) transient actions or 3) handle actions requested from real interrupt handlers. (For case 3 they named "bh" in old kernels, and "ksoftirqd" in newer ones.) Large part of these threads are attached to a single logical CPU. As soon as they have no user land, they have no user land stack.
External interrupt handlers are limited in Linux, AFAIK, to no more than one simultaneously executed handler for each logical CPU; during such handler execution, no IO interrupts are allowed. (NMIs are a terrible exception with bug-prone handling.) They come using task-switching interrupt gate and have got an own stack for each logical CPU, for the same reasons as described above.
As already noted, the most part of this is too x86-specific. Task switching with mandatory stack pointer replacing is rare to see at another architectures. For example, ARM32 has a single stack pointer per privilege level, so, if an external interrupt comes during kernel land, stack pointer is not changed.
Some details in this answer can be obsoleted due to high kernel development speed. Consider it only as a general suggestion and verify against the concrete version you will explore. For more description on x86 interrupt handling and task switching, please refer to "Intel® 64 and IA-32 Architectures Software Developer’s Manual Volume 3A: System Programming Guide, Part 1" (freely available on Intel website).
|
What are the main stacks in Linux? What I mean is, for example when an interrupt occurs what stack will be used for it, and what is the difference between user process and kernel process stacks?
| Main stacks in Linux |
trap break INT before your loop:
hashes=
trap break INT
for file in *; do
hashes+=$(md5sum -- "$file")$'\n'
done
trap - INTecho "$hashes" > hashes.txtI've corrected some dubious stuff in your script ($(ls), "\n").
FWIW, such a "nonlocal break" doesn't work in mksh or yash, but it does in bash, dash, zsh and ksh93.
|
I have a lot of files in a directory:
$ ls
file000001
file000002
# ... truncated ...
file999999I am calculating the md5sum of the files like this and finally dumping it to a file:
hashes=''
for file in $(ls); do
hashes+=$(md5sum $file)
hashes+="\n"
doneecho "$hashes" > hashes.txtNow, I would like to press Ctrl + C while the execution of the script is within the for loop and have the contents of hashes dumped to the hashes.txt file. Is this possible?
(Yes, I can append the md5sum to hashes.txt every time the md5sum of each file is calculated but I intend to do it this way (as shown above).)
Note that the example code above is terrible. I actually used md5sum as an example; I am doing some other stuff. The intention of my question is actually to find out how to make Ctrl + C work.
| Dump contents of variable to file while script is running in bash |
You are correct: they relate to the IO-APIC system. ERR is documented in the kernel documentation in Documentation/filesystems/proc (lines 677-680):ERR is incremented in the case of errors in the IO-APIC bus (the bus
that connects the CPUs in a SMP system. This means that an error has
been detected, the IO-APIC automatically retry the transmission, so it
should not be a big problem, but you should read the SMP-FAQ.AFAICT you shouldn't see this unless there is a hardware issue. As the documentation indicates, it's something that you show note and investigate if it happens frequently.
MIS doesn't show up in the documentation, but this Gentoo forum message from 2005 talks about it. The current arch/x86/apic/io_apic.c (lines 1797-1806) has the following comment:It appears there is an erratum which affects at least version 0x11 of
I/O APIC (that's the 82093AA and cores integrated into various
chipsets). Under certain conditions a level-triggered interrupt is
erroneously delivered as edge-triggered one but the respective IRR bit
gets set nevertheless. As a result the I/O unit expects an EOI
message but it will never arrive and further interrupts are blocked
from the source. The exact reason is so far unknown, but the
phenomenon was observed when two consecutive interrupt requests from a
given source get delivered to the same CPU and the source is
temporarily disabled in between.As this comment (and code) haven't significantly changed in over 10 years (other than kernel restructuring), I'm not sure how relevant it is today, but it is very small and protects against a strange hardware quirk.
The files that I looked at were from version 4.15.10 of the kernel. Your sources may vary.
|
Playing around looking at /proc/interrupts. The output below shows ERR and MIS on lines 26 and 27 respectively. What are these and why do they have counts (albeit of zero) for CPU0 but no others, as well as no description? Am I right to think they're actually to do with errors in the PIC/interrupt system itself?
Thanks ErikF for the reply. Why do these interrupts only appear for CPU0? Is it because only that CPU will receive an interrupt if there is an error with the PIC/Interrupt System?
1. username@domain:/proc$ cat interrupts
2. CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7
3. 0: 1221738 0 0 0 0 0 0 0 IO-APIC 2-edge timer
4. 1: 9 0 0 0 0 0 0 0 IO-APIC 1-edge i8042
5. 6: 3 0 0 0 0 0 0 0 IO-APIC 6-edge floppy
6. 8: 0 0 0 0 0 0 0 0 IO-APIC 8-edge rtc0
7. 9: 0 0 0 0 0 0 0 0 IO-APIC 9-fasteoi acpi
8. 12: 169 0 0 0 0 0 0 0 IO-APIC 12-edge i8042
9. 14: 0 0 0 0 0 0 0 0 IO-APIC 14-edge ata_piix
10. 15: 96 65508 0 0 0 0 0 0 IO-APIC 15-edge ata_piix
11. NMI: 0 0 0 0 0 0 0 0 Non-maskable interrupts
12. LOC: 402 123 273 78 134 110 118 110 Local timer interrupts
13. SPU: 0 0 0 0 0 0 0 0 Spurious interrupts
14. PMI: 0 0 0 0 0 0 0 0 Performance monitoring interrupts
15. IWI: 95 83 81 94 90 97 86 76 IRQ work interrupts
16. RTR: 0 0 0 0 0 0 0 0 APIC ICR read retries
17. RES: 2769117 3625540 1918695 3115064 2249434 2089381 1783180 2173439 Rescheduling interrupts
18. CAL: 3468 22419 21729 15320 20704 31602 15100 18188 Function call interrupts
19. TLB: 11579 12003 12034 10741 10855 11647 9593 11018 TLB shootdowns
20. TRM: 0 0 0 0 0 0 0 0 Thermal event interrupts
21. THR: 0 0 0 0 0 0 0 0 Threshold APIC interrupts
22. DFR: 0 0 0 0 0 0 0 0 Deferred Error APIC interrupts
23. MCE: 0 0 0 0 0 0 0 0 Machine check exceptions
24. MCP: 224 224 224 224 224 224 224 224 Machine check polls
25. HYP: 2620495 2791215 12310023 2806541 2615199 1920111 2463082 2627540 Hypervisor callback interrupts
26. ERR: 0
27. MIS: 0
28. PIN: 0 0 0 0 0 0 0 0 Posted-interrupt notification event
29. PIW: 0 0 0 0 0 0 0 0 Posted-interrupt wakeup event | Regarding /proc/interrupts what are MIS and ERR? |
I haven’t looked into the historical context behind the use of a single interrupt (0x80) for system calls on i386, but there are a few reasons not to use separate software interrupts for individual system calls.The number of software interrupts is limited, which constrains the number of system calls (and on x86, a number of interrupt descriptor table entries need to be used for other purposes). There are now more system calls than could be supported on most architectures using software interrupts.
Quite a few architectures have dedicated system call instructions, which don’t use software interrupts (or equivalent thereof). This includes x86-64, where the dedicated SYSCALL instruction provides faster system call access than software interrupts. man 2 syscall provides details of each architecture’s system call conventions.If you look at the details of the ARM/OABI architecture you might get the impression that an interrupt table was used there, since the system call number is encoded in the instruction; but the corresponding instruction executes a fixed software interrupt, ignoring the encoded number, and the system call handler retrieves the number from the instruction itself. This approach was abandoned for EABI because it led to cache pollution (more recent ARM CPUs have separate instruction and data caches).
|
Why does a system call table exist instead of just being appended to the interrupt vector table? I don't understand the design choice here. If it improves performance to differentiate events, why not system calls then?
| Why does a system call table exist and not just appended to the interrupt vector table? |
On Linux, yes, it is always safe to restart a system call which returned with EINTR: that return value means that the system call was interrupted before it made any useful progress, and should be restarted. The system call’s implementation takes this into account.
Cases where the system state has changed because of the interrupted system call are handled differently; for example, a read call which retrieved some data before it was interrupted will return that data, indicating success, and a write call which transferred some data before it was interrupted will return the amount of data it wrote, also indicating success. (Incidentally, this is one of the reasons it’s essential to check the return values of these functions and not assume that successful calls did all the work requested.)
Many system calls can be automatically restarted, by setting the SA_RESTART flag for the appropriate signals. The GNU C library provides a macro which can help write restarting code, TEMP_FAILURE_RETRY in unistd.h (defined if _GNU_SOURCE is defined).
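A hand-written restart loop typically looks like the following sketch, which is essentially what the TEMP_FAILURE_RETRY macro does around a single call:
#include <errno.h>
#include <unistd.h>

/* Behaves like read(2), but transparently restarts the call on EINTR.
 * This is safe because an EINTR return means no data was transferred;
 * a partial read is still reported as a short (successful) read. */
ssize_t read_retry(int fd, void *buf, size_t count)
{
    ssize_t n;
    do {
        n = read(fd, buf, count);
    } while (n == -1 && errno == EINTR);
    return n;
}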
Note that on Linux, system calls can return EINTR even without a signal handler.
The “Interruption of system calls and library functions by signal handlers” section of man 7 signal has all the details, including lists of affected system calls in the various scenarios.
|
I was reading a textbook which describes how to deal with system call when interrupted:System calls can be interrupted. System calls such as read, wait, and accept that can potentially block the process for a long period of time are called slow system calls. On some older versions of Unix, slow system calls that are interrupted when a handler catches a signal do not resume when the signal handler returns but instead return immediately to the user with an error
condition and errno set to EINTR. On these systems, programmers must include code that manually restarts interrupted system calls.But is it always safe to restart system calls?
Let's say a system call maintains an internal data structure that needs to be reset before the system call finishes. So we start the system call and it is long running and blocked, when a signal interrupts it, the system call just restarts, so the first system call doesn't have a chance to reset the data structure.
Since the data structure in the previous call wasn't get reset, after the second system call occurs, the data structure is not consistent, which might pollute the operation.
So is it safe to restart system calls?
| Is it safe to restart system calls? |
Time-sliced threads are threads executed by a single CPU core, which never truly runs them at the same time but switches between them over and over again.
This is in contrast to true parallel execution, where multiple CPU cores run many threads at the same time.
Interrupts interrupt thread execution regardless of which model is used, and when the interrupt-handling code exits, control is given back to the thread's code.
|
What does it mean when threads are time-sliced? Does that mean they work like interrupts, not exiting until the routine is finished? Or does the CPU execute one instruction from one thread, then one instruction from the second thread, and so on?
| Threads vs interrupts |
Put

trap ":" SIGINT

before the loop. This makes the shell "ignore" the signal, but only in the sense that it executes a dummy command, not "ignore" in the signal-handler (SIG_IGN) sense.
Because the shell process does not block or ignore this signal (from the kernel's perspective), it still gets through to its child processes (like tail in this example).
|
I have a bunch of log files and I want to do a tail -f on them in a loop such that when I press Ctrl-C, the current tail -f gets killed and I proceed to the next log file:
for log in *.log; do
printf '%s\n' "Tailing log '$log'; press Ctrl-C to skip to the next"
tail -f "$log"
done

The issue is that pressing Ctrl-C kills the loop itself. How can I restrict the interrupt signal to just the child process, tail in this case?
| How to restrict interrupt signal to just the child process? |
As you can read in some of bash's source code:
#if defined (HAVE_GETRUSAGE) && defined (HAVE_TIMEVAL) && defined (RUSAGE_SELF)
...
getrusage (RUSAGE_SELF, &self);
...
print_timeval (stdout, &self.ru_utime);
...
#else
# if defined (HAVE_TIMES)
...
times (&t);
print_clock_t (stdout, t.tms_utime);
...Depending on HAVE_GETRUSAGE, HAVE_TIMEVAL and RUSAGE_SELF settings, bash's time internal command will either use the getrusage system call or the times system call.
If getrusage is used, then bash's time command uses the ru_stime and ru_utime members of the rusage structure, whose associated timeval values have, according to the manual page linked above, microsecond accuracy.
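For reference, a minimal sketch of that getrusage path (not bash's actual code, just an illustration of the struct members involved):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rusage self;

    if (getrusage(RUSAGE_SELF, &self) == 0) {
        /* tv_sec/tv_usec give (up to) microsecond resolution */
        printf("user   %ld.%06lds\n",
               (long)self.ru_utime.tv_sec, (long)self.ru_utime.tv_usec);
        printf("system %ld.%06lds\n",
               (long)self.ru_stime.tv_sec, (long)self.ru_stime.tv_usec);
    }
    return 0;
}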
If the times system call is used instead, bash's time command outputs the clock_t members of the associated tms structure.
In that precise case, values are indeed reported in clock ticks, and you are therefore correct that the precision of the reported values cannot be better than 1/CONFIG_HZ.
As you might read in the times system call manual page, its use is discouraged for many reasons, so I believe the settings on your system (HAVE_GETRUSAGE…) are such that bash does not have to resort to it.
If not, then… you are correct, and the precision you report would be absurd.

A/ Why getrusage can report timings with a better precision than 1/CONFIG_HZ:
Because:

Since Linux 2.6.21, Linux supports high-resolution timers (HRTs). On a system that supports HRTs, the accuracy of sleep and timer system calls is no longer constrained by the jiffy, but instead can be as accurate as the hardware allows (microsecond accuracy is typical of modern hardware).

As you can read in this overview of time and timers.

B/ How can any sort of accuracy regarding utime/stime be achieved if no timer is set right at the beginning of each system call and read right after it completes:
Precise CPU time: who cares? The scheduler! The scheduler needs it for its Completely Fair sharing decisions, not to forget its obligations towards tasks willing to… nanosleep…
Which CPU time? The grand total, of course! That is to say u_time + s_time, since (apart from some very rare exceptions) the scheduler does not care a damn how (in which context) tasks squander their time slices.
So who cares about the s/u time split? CPU accounting aficionados!
B.1/ So… distinguishing u_time and s_time ?… let's… assume!
In the (good?) old times of Linux (understand: kernels before roughly 2.6.x), CPU accounting was some sort of simple (simplistic?) heuristic:
When some task is scheduled out, the scheduler takes note of the context in which the task was running, and…
CPU accounting is just happy to assume that all the time the task has been running since it was scheduled in was spent in the context it was running in when scheduled out!
From that, one can understand that debating the accuracy of the separate user and system CPU time reports in terms of clock precision is somewhat meaningless: there is simply an error in the determination of what is what, an error that accumulates over time. Only chance, combined with a good number of schedule-in/out events over the task's lifetime, lets the CPU-time accounting aficionado hope that errors in + will compensate errors in -, and hence that these values are meaningful when taken separately (their sum, of course, remains accurate and meaningful).
B.2/ Error OK but… not THAT much ! So… let's go… virtual!
On systems with virtual CPUs, when using more virtual CPUs than real CPUs on the virtual platform, the real CPUs might spend part of their time servicing another virtual processor while the time slice is accounted to some process which actually could not use it. This leads to not only erroneous but completely unrealistic s_time figures.
Long story short, around the 2.6.x kernels, Linux gained the virtual CPU time accounting feature, which provides (among many other things) CPU time accounting whenever the execution context changes, and consequently enables definitively precise and accurate u_times and s_times.
As anywhere, there is no free lunch… (per-CPU) timers everywhere… expect some significant overhead.
Never mind, many top-rated distros (RHEL, SUSE…) quickly made this virtual CPU time accounting the default.
Never mind (me personally): thanks to a kernel tunable (CONFIG_TICK_CPU_ACCOUNTING) offered since 3.7 (and AFAIK still the default on stock Linux kernels), I can still opt for the (good?) old cheap CPU accounting method, since… I (personally) only care about the sum u_time + s_time.
That'll be all folks ;-)
|
From the bash time command on different stock Ubuntu systems (both real hardware and VMs) all with CONFIG_HZ=250, I'm sometimes getting real 0m0.001s, user 0m0.001s or sys 0m0.001s as well as any other number of milliseconds.
How is this possible ?
Edit: I admit the elapsed (real) time can be computed exactly by querying any of the available high resolution time sources at the beginning and the end of the time command.
But because there is no time source query on system call entry or exit for performance reasons
I expected the accounting of user and sys CPU times to be in timer ticks (number of timer interrupts) which would be multiples of 4 ms.
2nd edit: Taking MC68020's answer into account, the question becomes: How is the Linux kernel able to return user and sys CPU usages with microsecond accuracy from the getrusage syscall ?
Did I do wrong when arguing there cannot be a time source query on each system call entry and exit for performance reasons ?
| How can the time command compute one millisecond from 4 ms timer ticks? |
Is it safe to allocate 414 bytes in the .bss section and point RSP to the top?

Since you’re controlling all the content of your executable, and presumably not linking to any libraries, this should be fine. Notably, the MAP_STACK mmap flag currently has no effect; any readable and writable page can be used for the stack.

at least some part of the kernel code operates in the calling executable context

Yes, system calls operate inside the calling process, but ...

Will it smash my stack?

... the kernel operates on its own stacks — otherwise userspace could change values inside the kernel during system call execution! It doesn’t touch the userspace stacks, although some system calls do care about the stack (clone in particular).

Also, interrupts can happen at any point in the program, and the story behind the "Red Zone" seems to suggest that an arbitrarily large region beyond RSP-128 can be written to at will by interrupt handlers, possibly mangling my data. What kinds of guarantees do I have about this behavior?

Hardware interrupts also use their own stacks, so you’re safe there too. To protect yourself from signal handlers, you can set up a dedicated stack using sigaltstack.
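For the signal-handler case, a minimal sketch of setting up an alternate signal stack with sigaltstack (the buffer size and the handled signal are arbitrary choices for this example):

#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

static char altstack[64 * 1024];        /* arbitrary size for the example */

static void handler(int sig)
{
    (void)sig;                          /* runs on altstack, not on the program's own stack */
}

int main(void)
{
    stack_t ss = { .ss_sp = altstack, .ss_size = sizeof altstack, .ss_flags = 0 };
    if (sigaltstack(&ss, NULL) == -1)
        exit(1);

    struct sigaction sa = { .sa_handler = handler, .sa_flags = SA_ONSTACK };
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);

    for (;;)
        pause();
}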
|
(This is in the context of x86-64 Linux.)
I am trying to write a high-reliability userland executable, and I have total control over the generated assembly. I don't want to rely on automatic stack allocation, so I would like to put the stack in a known location. Suppose I have calculated that my program uses at most 414 bytes of stack space (exactly). Is it safe to allocate 414 bytes in the .bss section and point RSP to the top? I want to ensure that no bytes outside this region are touched by stack management at any point.
While I can be sure that my program won't write outside the region, I need to make some syscalls (using the syscall instruction), and I think at least some part of the kernel code operates in the calling executable context. Will it smash my stack?
Also, interrupts can happen at any point in the program, and the story behind the "Red Zone" seems to suggest that an arbitrarily large region beyond RSP-128 can be written to at will by interrupt handlers, possibly mangling my data. What kinds of guarantees do I have about this behavior?
| Is it safe to use the .bss section as a static stack? |
Looking at the man page for lsdev there is this comment:

This program only shows the kernel's idea of what hardware is present, not what's actually physically available.

The output of lsdev is actually just the contents of the /proc/interrupts file:
excerpt from man proc
/proc/interrupts
This is used to record the number of interrupts per CPU per IO
device. Since Linux 2.6.24, for the i386 and x86_64 architectures,
at least, this also includes interrupts internal to the system (that
is, not associated with a device as such), such as NMI (non‐
maskable interrupt), LOC (local timer interrupt), and for SMP
systems, TLB (TLB flush interrupt), RES (rescheduling interrupt),
CAL (remote function call interrupt), and possibly others. Very
easy to read formatting, done in ASCII.

So I'd likely go off of the contents of /proc/interrupts instead:
$ cat /proc/interrupts
CPU0 CPU1 CPU2 CPU3
0: 157 0 0 0 IO-APIC-edge timer
1: 114046 13823 22163 22418 IO-APIC-edge i8042
8: 0 0 0 1 IO-APIC-edge rtc0
9: 863103 151734 155913 156348 IO-APIC-fasteoi acpi
12: 2401994 396391 512623 477252 IO-APIC-edge i8042
16: 555 593 598 626 IO-APIC-fasteoi mmc0
19: 127 31 83 71 IO-APIC-fasteoi ehci_hcd:usb2, firewire_ohci, ips
23: 32 8 21 16 IO-APIC-fasteoi ehci_hcd:usb1, i801_smbus
40: 5467 4735 1518263 1230227 PCI-MSI-edge ahci
41: 1206772 1363618 2193180 1477903 PCI-MSI-edge i915
42: 267 5142231 817 590 PCI-MSI-edge iwlwifi
43: 5 8 6 4 PCI-MSI-edge mei_me
44: 0 2 2 23405 PCI-MSI-edge em1
45: 19 66 39 23 PCI-MSI-edge snd_hda_intel
NMI: 12126 25353 28874 26600 Non-maskable interrupts
LOC: 29927091 27300830 30247245 26674337 Local timer interrupts
SPU: 0 0 0 0 Spurious interrupts
PMI: 12126 25353 28874 26600 Performance monitoring interrupts
IWI: 634179 806528 600811 632305 IRQ work interrupts
RTR: 5 1 1 0 APIC ICR read retries
RES: 4083290 3763061 3806592 3539082 Rescheduling interrupts
CAL: 16375 624 25561 737 Function call interrupts
TLB: 806653 778539 828520 806776 TLB shootdowns
TRM: 0 0 0 0 Thermal event interrupts
THR: 0 0 0 0 Threshold APIC interrupts
MCE: 0 0 0 0 Machine check exceptions
MCP: 416 416 416 416 Machine check polls
ERR: 0
MIS: 0

References

Linux list all IRQs currently in use
Kernel Korner - Dynamic Interrupt Request Allocation for Device Drivers
If we add a device that does not support PNP (Plug-and-Play), the manufacturer will hopefully provide explicit directions on how to assign IRQ values for it.
However, if we don't know what IRQ value to specify, what command line should be used to check if a IRQ value is free or not?
lsdev displays info about devices:
$lsdev
Device DMA IRQ I/O Ports
------------------------------------------------
0000:00:02.0 7000-703f
0000:00:1f.2 7060-707f 7080-7087 7088-708f 7090-7093 7094-7097
0000:00:1f.3 efa0-efbf
0000:01:00.0 6000-607f
0000:04:00.0 4000-40ff
0000:05:00.0 3000-30ff
acpi 9
ACPI 1800-1803 1804-1805 1808-180b 1810-1815 1820-182f 1850-1850
ahci 43 7060-707f 7080-7087 7088-708f 7090-7093 7094-7097
 cascade            4

What about this command, lsdev: is it enough for this task? For example, if we want to know if 1233 is free, we would run this command:

lsdev | awk '{print $3}' | grep 1233

NOTE: $3 above is used because the IRQ value is printed in the 3rd column of lsdev output.
Then if there is no output, does that mean it is free for us to use?
| How to know if an IRQ value is free to use |
The signals are processed in the order that you type them at the terminal. If you use Ctrl+C you're instructing the kernel to send the signal SIGINT to the foreground process group. Upon receiving this, the command that was running will (by default) be terminated.
With Ctrl+Z you're sending the signal SIGTSTP, which doesn't actually kill the process, just tells it to stop temporarily. When this is used you can resume the process by telling the shell to bring it back to the foreground via the fg command, or you can continue it in the background with the bg command.
If a job has been stopped via the SIGTSTP signal, then you can truly kill it with the kill command like so:

$ kill %1

Where %1 is the job id of the job you just stopped with SIGTSTP.
Checking on stopped jobs
You can use the jobs command to see which jobs have been stopped in a shell, like so:
$ sleep 1000
^Z
[1]+ Stopped sleep 1000
$ jobs
[1]+ Stopped sleep 1000

Here I've used Ctrl+Z to stop my sleep 1000 command. In the output, the [1] corresponds to the %1 that I mentioned above. Killing it like so would have the same effect as Ctrl+C.
$ kill %1
[1]+ Terminated sleep 1000

The fg and bg commands that I mentioned above act on the job that has the plus sign, +, after its number. Notice it here, in the jobs output:
[1]+ Stopped sleep 1000

It's more obvious if I have a couple of jobs, for example:
$ jobs
[1] Stopped sleep 1000
[2]- Stopped sleep 2000
[3]+ Stopped sleep 3000

So any bare fg or bg command will act on the job with the +. I can target a specific one like so:
$ fg %1
sleep 1000 |
People often hit Ctrl+C or Ctrl+Z to cancel or abandon a job if the process gets sluggish. In this case, which of these signals gets processed, the first one or the last one? Is each signal processed? If not, then which ones are ignored?
| What happens to the signals requested recursively? |
SIGKILL will terminate a process by default and cannot be blocked, whereas SIGINT will terminate a process by default but can be blocked. Since you stopped the process with Ctrl+Z (SIGTSTP), the SIGINT signal remains pending and will only be delivered to the process after it resumes its execution (SIGCONT).
|
According to the POSIX standard regarding signals (see "Standard signals" section), both SIGKILL and SIGINT have as default action Term. However, this seems not to be the case when the receiving process has been stopped via SIGSTOP (or SIGTSTP).
For the sake of simplicity, let's say I create a program with an infinite loop:
int main(){
while(1);
}

Compile and run it:

$ ./program

Then I press Ctrl+Z to stop it. This causes a SIGTSTP signal to be delivered to the process. Now, if I send a SIGINT signal to the process via kill -SIGINT pid and look at the process table with ps -aux, I see that it has not been terminated. However, if I send kill -SIGKILL pid, it does get terminated.
Why does SIGINT not behave in the same way as SIGKILL, if they both have the same default action (i.e. Term)?
| Why does SIGINT not terminate a stopped process? |
Chapter 6 of "Linux Kernel Development" by Robert Love explains it, as do these free web resources:

Linux Kernel Module Development Guide
linuxdriver.co.il
Linux Device Drivers

Basically, the top half's job is to run, store any state needed, arrange for the bottom half to be called, then return as quickly as possible. The bottom half does most of the work.
|
I would like to know more about top-half and bottom-half processing in the context of interrupts. Could someone explain to me exactly what happens in each?
| What happens in the Top half and Bottom Half processing of Interrupts? |
Depending on what you've written and what data structures it uses, it's hard to say, but:

I read that interrupts can't sleep, does that mean I am guaranteed that my handlers (hooks and read handlers) will be executed one after the other, or do I need to use locks to prevent simultaneous access to the same resources from different functions?

While it's true that interrupts aren't allowed to sleep, you also have to consider that an interrupt interfacing with this data structure can simultaneously run on another CPU, or that another interrupt might stack on top of your current interrupt being acted on, taking it temporarily off the CPU. In either case, you need to handle the deadlocking case, and the case where two threads compete for reads/writes.
So yes, there's no reason to believe just based on what you've written that you don't need a synchronisation mechanism of some kind. Depending on your particular case, you might find synchronisation simpler if you disable further interrupts on that CPU (eg. in the case of percpu variables).
What the appropriate mechanism is will depend on what you're guarding access to and how lengthy and costly that is likely to be, although since you are executing an interrupt, you're somewhat limited in that you can only really choose non-blocking primitives.
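As an illustration only (not taken from your module; the structure and function names here are hypothetical), a common pattern is a spinlock taken with local interrupts disabled around the shared data, used both from the hook and from the char-device write handler:

#include <linux/spinlock.h>
#include <linux/skbuff.h>
#include <linux/fs.h>

/* Hypothetical shared state, touched by both the netfilter hook and the chardev. */
struct my_shared_state { unsigned long packets; };
static struct my_shared_state shared_state;
static DEFINE_SPINLOCK(shared_lock);

/* Called from a netfilter hook (softirq context). */
static void hook_update(const struct sk_buff *skb)
{
    unsigned long flags;

    spin_lock_irqsave(&shared_lock, flags);    /* takes the lock, disables local IRQs */
    shared_state.packets++;
    spin_unlock_irqrestore(&shared_lock, flags);
}

/* Called from the char device's write handler (process context). */
static ssize_t dev_write(struct file *f, const char __user *buf,
                         size_t len, loff_t *off)
{
    unsigned long flags;

    spin_lock_irqsave(&shared_lock, flags);
    shared_state.packets = 0;                  /* e.g. a "reset counters" command */
    spin_unlock_irqrestore(&shared_lock, flags);
    return len;
}

Using the _irqsave variant in both contexts keeps the process-context path from being interrupted on the same CPU while it holds the lock, which is what prevents the deadlock case mentioned above.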
|
I have a kernel module with netfilter hooks at different points in the packet route, and the hooks use shared resources. In addition, the module has a char device that may be written to, that also affects these resources.
I am not sure if I need to use locks when different handlers access these resources. I read that interrupts can't sleep, does that mean I am guaranteed that my handlers (hooks and read handlers) will be executed one after the other, or do I need to use locks to prevent simultaneous access to the same resources from different functions?
thanks.
| Resource locking in interrupts |
I wrote a comprehensive blog post about Linux network tuning which explains everything about monitoring, tuning, and optimizing the Linux network stack (including the NAPI weight). Take a look.
Keep in mind: some drivers do not disable IRQs from the NIC when NAPI starts. They are supposed to, but some simply do not. You can verify this by examining the hard IRQ handler in the driver to see if hard IRQs are being disabled.
Note that hard IRQs are re-enabled in some cases as mentioned in the blog post and below.
As far as your questions:

Increasing netdev_budget increases the number of packets that the NET_RX softirq can process. The number of packets that can be processed is also limited by a time limit, which is not tunable. This is to prevent the NET_RX softirq from eating 100% of CPU usage. If the device does not receive enough packets to process during its time allocation, hardirqs are re-enabled and NAPI is disabled.
You can also try modifying your IRQ coalescing settings for the NIC, if it is supported. See the blog post above for more information on how to do this and what this means, exactly.
You should add monitoring of your /proc/net/softnet_stat file. The fields in this file can help you figure out how many packets are being processed, whether you are running out of time, etc.

A question for you to consider, if I may:
Why does your hardirq rate matter? It probably doesn't matter, directly. The hardirq handler in your NIC driver should do as little work as possible, so it executing a lot is probably not a problem for your system. If it is, you should carefully measure that as it seems very unlikely. Nevertheless, you can adjust IRQ coalescing settings and IRQ CPU affinity to distribute processing to alter the number of hardirqs generated by the NIC and processed by a particular CPU, respectively.
You should consider whether you are more interested in packet processing throughput or packet processing latency. Depending on which is your concern, you can tune your network stack appropriately.
Remember: to completely tune and optimize your Linux networking stack, you have to monitor and tune each component. They are all intertwined and it is difficult (and often incorrect) to monitor and tune just a single aspect of the stack.
|
I am trying to test the NAPI functionalities on embedded linux environment. I used 'pktgen' to generate the large number of packets and tried to verify the interrupt count of my network interface at /proc/interrupts.
I found out that interrupt count is comparatively less than the packets generated. Also I am trying to tune the 'netdev_budget' value from 1 to 1000(default is 300) so that I can observe the reduction in interrupt count when netdev_budget is increased.
However, increasing the netdev_budget doesn't seem to help. The interrupt count is similar to that observed with netdev_budget set to 300.
So here are my queries:

What is the effect of 'netdev_budget' on NAPI?
What other parameters can/should I tune to observe changes in the interrupt count?
Is there any other way to test the NAPI functionality on Linux (apart from directly looking at the network driver code)?

Any help is much appreciated.
Thanks in advance.
| How to test linux NAPI feature? |
Ctrl+Z is actually a feature of the generic terminal interface in the kernel, not of bash. It causes a SIGTSTP signal to be sent to the foreground process. Likewise Ctrl+C sends SIGINT and Ctrl+\ sends SIGQUIT.
There are two ways in which a program can cause Ctrl+Z to lose its effect.

The program can ignore the SIGTSTP signal.
You can check a process's signal behavior with a debugger. On Linux, the information is also available via /proc: grep Sig /proc/1234/status, where 1234 is the process ID, shows which signals are ignored (SigIgn, they'll just bounce off harmlessly) or blocked (SigBlk, they're put on hold until the program lets them in). The number is a bitmask written out in hexadecimal. SIGTSTP is signal 20 (run kill -l in bash), so it is ignored if, on the SigIgn line, the fifth digit from the right is 8 or above.
The program can change the key bindings.
You can check the current key bindings with a command line like stty -a </dev/pts/42 where /dev/pts/42 is the terminal where the process is running. Look for susp = ^Z.

A daemon is likely to ignore most signals. Launch it in the background if it doesn't fork by itself (most daemons actually fork a child as soon as they start and let the parent exit immediately). If you launched it in the foreground, there are many ways to recover (send it or its parent shell a signal) but you'll need another shell for that.
|
If I run mysqld from the command line, it displays some startup messages, and then stops responding. It doesn't produce output, and ignores any input, and there seems to be no way to get rid of it. ctrl+z and ctrl+c don't do anything, there is no way to get back to a terminal - I have to start a new session. This means, for instance, if I am connecting to a server via ssh I have to initiate a new ssh connection.
This is very annoying. Is there any way in bash to tell it to deactivate the currently running process? Any other program I can suspend with ctrl+z, but not mysqld, though I don't know why. If I start it with mysqld & then it runs in the background and I can send signals to it normally, as long as I don't make the mistake of typing fg, and it otherwise acts like an ordinary process, so I don't know why ctrl+z has no effect on it, since I thought that was part of bash rather than being under the control of the process.
So: Is there an alternative to ctrl+z, for when ctrl+z does not work?
| How to interrupt uninterruptible program? |
xhci stands for eXtensible Host Controller Interface, which is the standard for USB 3.0 "SuperSpeed" host controller hardware.
irq/21-xhci-hcd is likely the kernel thread servicing the IRQ associated with one particular USB bus (which can host several different USB devices).
The lsusb -t utility should give you more information (and lsusb -vt even more) regarding the individual devices, the bus they are connected to, as well as the drivers involved (one for the bus and another for the device itself).
The problem you notice could be due to one particular very busy or malfunctioning USB device. You should be able to identify it (thanks to the lsusb utility) and physically remove it (if possible) and re-test.
Time spent servicing software interrupts is likely, but not necessarily, related. You could check whether it evolves in the same order of magnitude by periodically viewing /proc/stat, watching in particular the "tasklet" softirq.
Regarding the latter, while the info is reported as part of /proc/stat, one might find it easier to watch /proc/softirqs (because of the explicit labels).
|
I have irq/21-xhci-hcd displayed as the process consuming 90% CPU on top. There was also a lot of CPU time spent on servicing software interrupts (si).
This is on embedded Linux.
Does this mean that it's IRQ 21? Can I then use lspci -vvv to get more info on IRQ 21?
If that's not the case, do I need to use other methods like doing dmesg or watch -n1 -d "cat /proc/interrupts"?
What's the best way to get more info on this, including which kernel module is affected? Which kernel thread and function is responsible?
| How to get more details on irq process in top? |
Job control refers to the protocol for allowing a user to move between multiple process groups (or jobs) within a single login session.

https://www.gnu.org/software/libc/manual/html_node/Job-Control.html
Generally it's enabled in interactive shells, and disabled in non-interactive ones:
$ echo $-; sleep 1 & fg
himBHs
[1] 84366
sleep 1

$ bash -c 'echo $-; sleep 1 & fg'
hBc
bash: line 1: fg: no job control

In this case... apparently job control is disabled, and $- can't be relied upon:
$ (echo $-; sleep 1 & fg)
himBHs
bash: fg: no job controlThe shell associates a job with each pipeline.https://www.gnu.org/software/bash/manual/html_node/Job-Control-Basics.html
That is, when job control is enabled each pipeline is executed in a separate process group.
pgid.sh:
#!/usr/bin/env bash
ps -o pgid= $$

$ ./pgid.sh >&2 | ./pgid.sh >&2; ./pgid.sh; ./pgid.sh & wait
93439
93439
93443
[1] 93445
93445
[1]+ Done ./a.sh

$ (./pgid.sh >&2 | ./pgid.sh >&2; ./pgid.sh; ./pgid.sh & wait)
93749
93749
93749
93749One of the jobs is a foreground job, the rest are background ones.
Background jobs are not supposed to be tied to the shell that started them. If you exit a shell, they will continue running. As such they shouldn't be interrupted by SIGINT, not by default. When job control is enabled, that is fulfilled automatically, since background jobs are running in separate process groups. When job control is disabled, bash makes the asynchronous commands ignore SIGINT, and doesn't let them (if they're bash scripts) override it.
That is, here:
$ bash -c "trap 'echo INT' INT; sleep 3" & pid=$!; sleep 1; kill -INT "$pid"; waitthe background job (bash -c "trap 'echo INT' INT; sleep 3") is executed by an interactive shell, which has job control enabled. As a result the background job receives SIGINT.
When we wrap it into a non-interactive shell without job control:
$ (bash -c "trap 'echo INT' INT; sleep 3" & pid=$!; sleep 1; kill -INT "$pid"; wait)bash -c "trap 'echo INT' INT; sleep 3" ignores SIGINT, and trap ... INT is also ignored.
This can be confirmed this way:
$ bash -c "trap 'echo INT' INT; trap; sleep 3" & pid=$!; sleep 1; kill -INT "$pid"; wait
[1] 293631
trap -- 'echo INT' SIGINT
trap -- '' SIGFPE
INT
[1]+ Done bash -c "trap 'echo INT' INT; trap; sleep 3"

$ (bash -c "trap 'echo INT' INT; trap; sleep 3" & pid=$!; sleep 1; kill -INT "$pid"; wait)
trap -- '' SIGINT
trap -- '' SIGQUIT
trap -- '' SIGFPE

$ bash -c 'ps -o pid,ignored,comm,args -p $$' & wait
[1] 345833
PID IGNORED COMMAND COMMAND
345833 0000000000000000 ps ps -o pid,ignored,comm,args -p 345833
[1]+ Done bash -c 'ps -o pid,ignored,comm,args -p $$'

$ (bash -c 'ps -o pid,ignored,comm,args -p $$' & wait)
PID IGNORED COMMAND COMMAND
345629 0000000000000006 ps ps -o pid,ignored,comm,args -p 345629A couple of relevant quotes:Non-builtin commands started by Bash have signal handlers set to the values inherited by the shell from its parent. When job control is not in effect, asynchronous commands ignore SIGINT and SIGQUIT in addition to these inherited handlers. Commands run as a result of command substitution ignore the keyboard-generated job control signals SIGTTIN, SIGTTOU, and SIGTSTP.https://www.gnu.org/software/bash/manual/html_node/Signals.htmlSignals ignored upon entry to the shell cannot be trapped or reset.https://www.gnu.org/software/bash/manual/html_node/Bourne-Shell-Builtins.html#index-trapJob control refers to the ability to selectively stop (suspend) the execution of processes and continue (resume) their execution at a later point. A user typically employs this facility via an interactive interface supplied jointly by the operating system kernel’s terminal driver and Bash.
The shell associates a job with each pipeline. It keeps a table of currently executing jobs, which may be listed with the jobs command. When Bash starts a job asynchronously, it prints a line that looks like:
[1] 25647indicating that this job is job number 1 and that the process ID of the last process in the pipeline associated with this job is 25647. All of the processes in a single pipeline are members of the same job. Bash uses the job abstraction as the basis for job control.
To facilitate the implementation of the user interface to job control, the operating system maintains the notion of a current terminal process group ID. Members of this process group (processes whose process group ID is equal to the current terminal process group ID) receive keyboard-generated signals such as SIGINT. These processes are said to be in the foreground. Background processes are those whose process group ID differs from the terminal’s; such processes are immune to keyboard-generated signals. Only foreground processes are allowed to read from or, if the user so specifies with stty tostop, write to the terminal. Background processes which attempt to read from (write to when stty tostop is in effect) the terminal are sent a SIGTTIN (SIGTTOU) signal by the kernel’s terminal driver, which, unless caught, suspends the process.

https://www.gnu.org/software/bash/manual/html_node/Job-Control-Basics.html
|
$ bash -c "trap \"echo INT\" INT; sleep 3" & pid=$!; sleep 1; kill -INT $pid; wait
[1] 27811
INT
[1]+ Done bash -c "trap \"echo INT\" INT; sleep 3"

$ (bash -c "trap \"echo INT\" INT; sleep 3" & pid=$!; sleep 1; kill -INT $pid; wait)

Can you explain why the SIGINT handler doesn't get invoked in the second case?
| Setting a trap for INT doesn't work in a subshell |
What you discuss is the difference between non-preemptive and preemptive scheduling.
Non-preemptive (also called cooperative) scheduling is a bit simpler (no timer needed), and it does not need locks when threads communicate. Examples: Apple Mac OS 9 and earlier, I think early MS-Windows, many embedded systems, and micro-threads.
So yes, a timer is needed. However, regarding your question about the simplest hardware: Unix needs an MMU, and this is far more complex than a timer. (Actually there are Unix-like systems that have no MMU, and in a lot of situations they work the same; the differences are: no memory protection, no swap/paging.)
Another way to allow task switching in the while(true) case is to use code injection: the compiler or loader injects code to cooperatively yield. This can be hard to do: for a loader there may not be enough information to know where it is needed, and it may break assumptions of atomicity. Given the right language and compiler, it probably could be done well; however, I don't know of any examples.
|
From an assembler point of view, when we write code that just jumps a few instructions back and never jumps to any control function that the scheduler might use, how can Unix interrupt such code?
I assume it uses a timer and interrupts. So the question is: can we implement a Unix system on hardware without interrupts and still deal with the infinite-loop code in finite time?
Or in other words, am I right to assume that the only way Unix can deal with code like 'while(true){}' is through a hardware timer with interrupts?
And if so, what is the minimum requirement for implementing a Unix-like system on hardware without a hardware timer and interrupts?
| infinite loop VS scheduler |
The easiest approach is probably to have your program use pipe2 three times to create three pipes (for each of stdin, stdout, and stderr). Probably you want them in non-blocking mode. Then fork, and have the child use dup2 to put the pipes into file descriptors 0, 1, and 2. The child then uses one of the exec family to run bash.
The parent can then use select to determine when there is data to be read or space to write.
There are probably libraries or existing implementations you could leverage.
Note #1: pipe2 returns two file descriptors, one for the read side and one for the write side of the pipe. E.g., for bash's stdin, bash needs the read side (to read input) and your program needs the write side (to write bash's input). bash's stdout would be the opposite: bash needs the write side, your program needs the read side.
Note #2: This doesn't give you a full terminal experience; for that you'd need to deal with ptys, which adds a bunch of complexity (and honestly I'd have to look it up). If you want that, I definitely suggest looking for a similar program to start from.
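To make the first approach concrete, a rough sketch under those assumptions (only two pipes, with bash's stderr merged into its stdout for brevity; error handling mostly omitted):

#define _GNU_SOURCE
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    int in_pipe[2], out_pipe[2];          /* [0] = read end, [1] = write end */

    pipe2(in_pipe, 0);                    /* parent writes bash's stdin  */
    pipe2(out_pipe, 0);                   /* parent reads bash's stdout  */

    pid_t pid = fork();
    if (pid == 0) {                       /* child: become bash */
        dup2(in_pipe[0], STDIN_FILENO);
        dup2(out_pipe[1], STDOUT_FILENO);
        dup2(out_pipe[1], STDERR_FILENO);
        close(in_pipe[1]);
        close(out_pipe[0]);
        execlp("bash", "bash", (char *)NULL);
        _exit(127);
    }

    /* parent: keep only its own ends, make the read end non-blocking for select() */
    close(in_pipe[0]);
    close(out_pipe[1]);
    fcntl(out_pipe[0], F_SETFL, O_NONBLOCK);

    /* ... select() on out_pipe[0] for bash's output, and write commands
       received over the radio link to in_pipe[1] ... */
    return 0;
}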
|
I have written a set of programs with the intent of using a radio transmitter-receiver (NRF24L01) to connect two devices as if they were connected via a serial interface.
Currently, i am able to send bash commands in one direction, lets say from device A to B. My A device is currently an AVR microcontroller.
My B device is a Raspberry Pi. I use the following command to pipe the received text to bash. This allows commands to be sent but not for their output to be sent back to the A device.

./program | bash

I am not sure how to pipe the output from bash back into my program in a way that will not block and prevent the program from reacting to received data. Even if it is possible to set up a pipe in both directions, I do not think I can use functions like fgets, as they are blocking.
Both devices share the same library for transmit and receive functionality, these transmit and receive functions can be called with an option to make them non-blocking.
| Writing and Executing Program to behave like console |
Read section 2 in this: Specifying interrupt information for devices ...

2) Interrupt controller nodes
A device is marked as an interrupt controller with the "interrupt-controller"
property. This is a empty, boolean property. An additional "#interrupt-cells"
property defines the number of cells needed to specify a single interrupt.
It is the responsibility of the interrupt controller's binding to define the
length and format of the interrupt specifier. The following two variants are
commonly used:
a) one cell

The #interrupt-cells property is set to 1 and the single cell defines the
index of the interrupt within the controller.
Example:
vic: intc@10140000 {
    compatible = "arm,versatile-vic";
    interrupt-controller;
    #interrupt-cells = <1>;
    reg = <0x10140000 0x1000>;
};

sic: intc@10003000 {
    compatible = "arm,versatile-sic";
    interrupt-controller;
    #interrupt-cells = <1>;
    reg = <0x10003000 0x1000>;
    interrupt-parent = <&vic>;
    interrupts = <31>; /* Cascaded to vic */
};

b) two cells

The #interrupt-cells property is set to 2 and the first cell defines the
index of the interrupt within the controller, while the second cell is used
to specify any of the following flags:

bits[3:0] trigger type and level flags
1 = low-to-high edge triggered
2 = high-to-low edge triggered
4 = active high level-sensitive
8 = active low level-sensitive

Example:

i2c@7000c000 {
    gpioext: gpio-adnp@41 {
        compatible = "ad,gpio-adnp";
        reg = <0x41>;

        interrupt-parent = <&gpio>;
        interrupts = <160 1>;

        gpio-controller;
        #gpio-cells = <1>;

        interrupt-controller;
        #interrupt-cells = <2>;

        nr-gpios = <64>;
    };

    sx8634@2b {
        compatible = "smtc,sx8634";
        reg = <0x2b>;

        interrupt-parent = <&gpioext>;
        interrupts = <3 0x8>;

        #address-cells = <1>;
        #size-cells = <0>;

        threshold = <0x40>;
        sensitivity = <7>;
    };
};
This part of the device tree is handled by code in drivers/of/irq.c (e.g. of_irq_parse_one()).
The two lines you refer to in the quoted example declare the device (gpio-exp@21) to be an interrupt controller and any other device that wants to use it must provide two cells per interrupt.
Just above those lines is an example of a device specifying an interrupt in another interrupt controller (not this one, but the device with alias gpio), via the two properties interrupt-parent and interrupts (or you could use the new interrupts-extended which allows different interrupt controllers for each interrupt by specifying the parent as the first cell of the property).
|
I'm trying to setup a device tree source file for the first time on my custom platform. On the board is a NXP PCA9555 gpio expander. I'm attempting to setup node for the device and am a bit confused.
Here is where I'm at with the node in the dts file:
ioexp0: gpio-exp@21 {
    compatible = "nxp,pca9555";
    reg = <21>;

    interrupt-parent = <&gpio>;
    interrupts = <8 0>;

    gpio-controller;
    #gpio-cells = <2>;

    /* I don't understand the following two lines */
interrupt-controller;
#interrupt-cells = <2>;
};I got to this point by using the armada-388-gp.dts source as a guide.
My confusion is on what code processes the #interrupt-cells property. The bindings documentation is not very helpful at all for this chip as it doesn't say anything regarding interrupt cell interpretation.
Looking at the pca953x_irq_setup function in the source code for the pca9555 driver - I don't see anywhere that the #interrupt-cells property is handled. Is this handled in the linux interrupt handling code? I'm just confused as to how I'm suppose to know the meaning of the two interrupt cells.
pca953x_irq_setup for your convenience:
static int pca953x_irq_setup(struct pca953x_chip *chip,
                             int irq_base)
{
    struct i2c_client *client = chip->client;
    int ret, i;

    if (client->irq && irq_base != -1
            && (chip->driver_data & PCA_INT)) {
        ret = pca953x_read_regs(chip,
                                chip->regs->input, chip->irq_stat);
        if (ret)
            return ret;

        /*
         * There is no way to know which GPIO line generated the
         * interrupt. We have to rely on the previous read for
         * this purpose.
         */
        for (i = 0; i < NBANK(chip); i++)
            chip->irq_stat[i] &= chip->reg_direction[i];
        mutex_init(&chip->irq_lock);

        ret = devm_request_threaded_irq(&client->dev,
                                        client->irq,
                                        NULL,
                                        pca953x_irq_handler,
                                        IRQF_TRIGGER_LOW | IRQF_ONESHOT |
                                        IRQF_SHARED,
                                        dev_name(&client->dev), chip);
        if (ret) {
            dev_err(&client->dev, "failed to request irq %d\n",
                    client->irq);
            return ret;
        }

        ret = gpiochip_irqchip_add_nested(&chip->gpio_chip,
                                          &pca953x_irq_chip,
                                          irq_base,
                                          handle_simple_irq,
                                          IRQ_TYPE_NONE);
        if (ret) {
            dev_err(&client->dev,
                    "could not connect irqchip to gpiochip\n");
            return ret;
        }

        gpiochip_set_nested_irqchip(&chip->gpio_chip,
                                    &pca953x_irq_chip,
                                    client->irq);
    }

    return 0;
}
UPDATE:
As a clarification - I am working with kernel version 4.12-rc4 at the moment.
I now understand that I was misinterpreting some properties of the device tree. I was previously under the impression that the driver had to specify how all properties were handled. I now see that linux will actually handle many of the generic properties such as gpios or interrupts (which makes a lot of sense).
Here is a bit more of a detailed explanation of how the translation from intspec to IRQ_TYPE* happens:
The function of_irq_parse_one copies the interrupt specifier integers to a struct of_phandle_args here. This arg is then passed to irq_create_of_mapping via a consumer function (e.g. of_irq_get). This function then maps these args to a struct irq_fwspec via of_phandle_args_to_fwspec and passes its fwspec data to irq_create_fwspec_mapping. These functions are all found in irqdomain.c. At this point the irq will belong to an irq_domain or use the irq_default_domain. As far as I can tell, the pca953x driver uses the default domain. This domain is often set up by platform-specific code. I found mine by searching for irq_domain_ops on cross reference. A lot of these seem to do simple copying of intspec[1] & IRQ_TYPE_SENSE_MASK to the type variable in irq_create_fwspec_mapping via irq_domain_translate. From here the type is set to the irq's irq_data via irqd_set_trigger_type.
| Confusion regarding #interrupt-cells configuration on PCA9555 expander |
As it is for a UPS, I imagine you can afford to poll the modem signals every 10 seconds or so (from freebsd tty(4)):
int state;
if(ioctl(fd, TIOCMGET, &state)...)
if(state & TIOCM_DTR)...

However, if you want to be notified immediately of changes in exactly one
modem signal, namely DCD, you can set the tty flags to clear CLOCAL
(see termios), then a later
open() on the port will hang until modem signal DCD is active. When you
lose DCD you will also get a SIGHUP.
Another mechanism is to connect one of your lines to the input RX data pin. If you pull this high for more than the appropriate character time for the speed you have set, it will generate a framing error. If you set IGNBRK=0 and BRKINT=0 then the driver will place a null byte \0 on the input queue which can unblock a pending read(), if in raw mode.
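For the polling approach, a rough sketch (the device path and the 10-second interval are just placeholders):

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <termios.h>

int main(void)
{
    /* O_NONBLOCK so the open doesn't wait for DCD if CLOCAL is clear */
    int fd = open("/dev/cuau0", O_RDONLY | O_NONBLOCK);   /* placeholder device */
    if (fd == -1)
        return 1;

    int prev = -1;
    for (;;) {
        int state;
        if (ioctl(fd, TIOCMGET, &state) == -1)
            break;

        /* react only when the input lines (DCD/DSR/CTS/RI) change */
        int lines = state & (TIOCM_CD | TIOCM_DSR | TIOCM_CTS | TIOCM_RI);
        if (lines != prev) {
            printf("modem lines now 0x%x\n", lines);
            /* ... trigger the shutdown script here if appropriate ... */
            prev = lines;
        }
        sleep(10);
    }
    return 0;
}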
|
I need to write a program to react to the modem control lines changing in the serial port on my FreeBSD 10.3 machine. I don't want to read any data from it (there won't be any). I'm aware I can configure the port to raise (in my instance) IRQ4 when this happens, but how could my program pick up on the interrupt? Do I have to install a function pointer in the interrupt descriptor table, and if so, how? Or is there something simpler I can piggyback off?
For lack of rep I was unable to comment against plonk's helpful answer here: Viewing (monitoring) line status of a serial port
I did something similar for a parallel port in MS-DOS a while back when I made a digital readout for my milling machine, but as far as I remember that was in real mode and hooked into the IVT, which I suspect will have been simple by comparison.
(Explanation: I inherited a working but simple UPS which lacks a comms port. I figured I could 'read' the panel LEDs and beeper via electrical isolation/level conversion to the control lines in my server's serial port. Basically if anything about the machine's UPS changes from the norm, a shutdown script will be initiated. Crusty, I know, but if I can get it set up fairly quickly it'll save me £100 on a newer one.)
| Hook serial port interrupt in FreeBSD |
From one standpoint: whatever piece of code (microcode included) runs under a Linux system, it runs as a more or less immediate consequence of some hardware interrupt.
From the opposite standpoint: no hardware interrupt will immediately run anything other than the kernel's internal housekeeping inside its hardware interrupt handler. The interesting part (from the userland standpoint) will be deferred for later processing by some kernel thread.
And what is responsible for selecting which thread should be elected to run? The scheduler!
This is the first element of a possible answer to your question: the scheduler is the component linking processes to interrupts, in that:

It will eventually launch the kernel threads (ksoftirqd, or whatever IRQ-dedicated kthread if the system was booted irq-threaded) to process the deferred parts, which will incidentally mark some events pending,
it will then eventually launch the task that was sleeping, waiting for this particular event.
I know that an interrupt is a signal sent to the kernel asking for handling. In some cases we have a physical device, like a keyboard, with a driver that connects a process with an interrupt (key pressed). But what about timers or other things that don't have a physical device?
Maybe I have the wrong idea about all this, and I will be glad if someone corrects me.
| How is the process programmatically connected to an interrupt? |
that output has completed, or

Are interrupts [also] used to signify that output is ready?

Yes.
Consider writing to a serial port device. The device has a transmit buffer, called a FIFO, which stores a small amount of data, e.g. 16 bytes.
There may be an interrupt both when

the buffer falls empty and output is complete. This is used to implement tcdrain() on Linux. Allegedly "this function should be used when changing parameters that affect output". E.g. when you want to change the "baud rate" (frequency) of the serial port, you can use this to wait until all buffered data has been transmitted using the current baud rate.
a byte has been transmitted out from the buffer. There is now space available. The device is now ready for the CPU to push another byte into the buffer.

that input data are available, or

Are interrupts [also] used to signify that input has completed?

Maybe. I'm not sure there are two different things here though, at least in my example.
Consider reading from a serial port device. The device has a receive buffer, called a FIFO, which stores a small amount of data, e.g. 16 bytes.
When the FIFO has collected at least one byte from the input, the device sends an interrupt. For example it may change from a low to a high voltage on a line connected to the CPU.
The CPU can consume bytes from the buffer, by reading from an IO port or from IO memory.
Sidenote: such transactions may be allowed to take longer than reading from system RAM. To allow this, the IO device must insert "wait states" on the bus. I.e. there is a brief handshake, where it may take several cycles of the bus frequency before the IO device sets a "data ready" bit. Wait states may equally be applied when writing to an IO port / IO memory. However, wait states are only used to cover a known difference in operation frequency/latency between different devices. They are not used to wait for external input or output. This is because they block the CPU from continuing and doing anything else.
So when input is available, an interrupt is signalled. If you like, you can say the input is "completed" when the CPU has read in the input byte. But no interrupt is required to signal this. Just as no interrupt is required to signal that a read from memory is complete.
A condition where the input buffer is full actually seems more like an error condition - it suggests a buffer overflow. That condition may indeed be recorded by the device, allowing the OS to detect the error. However, I don't think there is reason to send an interrupt specifically for overflow. Because the device could already have sent an interrupt when input became available.
|
Operating System Concepts says:

During I/O, the various device controllers raise interrupts when they
are ready for service. These interrupts signify

that output has completed, or
that input data are available, or
that a failure has been detected.

Are interrupts used to signify that output is ready or that input has completed?
If not, do they need to be signified in some other way?
| Are interrupts used to signify that output is ready or input has completed? |
The original PCI had four dedicated interrupt pins, INTA to INTD. IIRC, across the motherboard they would wire them in a rotated manner so if every card used INTA they still wouldn't be the same physical lines. Anyway, in PCI-e, these legacy interrupts are emulated with a specific packet type; there are no dedicated lines any more.
This is telling you that in your current APIC's configuration, INTA will trigger interrupt 17 (IRQ17) to the processor when the peripheral sends one of these packets. Then the appropriate driver can register itself to be called on IRQ17.
|
When I run the command: sudo lspci -vvv, I see the following among the output:
0c:00.0 Network controller: Broadcom Corporation BCM4311 802.11b/g WLAN (rev 01)
Subsystem: Dell Wireless 1390 WLAN Mini-Card
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 64 bytes
Interrupt: pin A routed to IRQ 17
Region 0: Memory at ecffc000 (32-bit, non-prefetchable) [size=16K]
Capabilities: [40] Power Management version 2
Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=375mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=2 PME-
Capabilities: [58] MSI: Enable- Count=1/1 Maskable- 64bit-
Address: 00000000 Data: 0000
Capabilities: [d0] Express (v1) Legacy Endpoint, MSI 00
DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
ExtTag+ AttnBtn- AttnInd- PwrInd- RBE- FLReset-
DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
MaxPayload 128 bytes, MaxReadReq 128 bytes
DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s, Exit Latency L0s <4us, L1 <64us
ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
Capabilities: [100 v1] Advanced Error Reporting
UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UESvrt: DLP+ SDES- TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
Capabilities: [13c v1] Virtual Channel
Caps: LPEVC=0 RefClk=100ns PATEntryBits=1
Arb: Fixed- WRR32- WRR64- WRR128-
Ctrl: ArbSelect=Fixed
Status: InProgress-
VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=01
Status: NegoPending- InProgress-
Kernel driver in use: b43-pci-bridge
Kernel modules: ssb, wl

There is one line of the above output which says: Interrupt: pin A routed to IRQ 17. I wonder what its meaning is. Does anybody know of a reference which explains the details of that message and of interrupts/pins? Thanks.
| What does this mean: "Interrupt: pin A routed to IRQ 17" |
I resolved this issue. I got an error that the following packages have unmet dependencies, which we can see in the picture above, so I simply installed those dependencies by entering the command sudo apt install libavfilter7:i386, and I was able to get rid of that problem.
|
I mistakenly broke my pop-OS, and when I restart it I get an error window saying "Error occurred, system cannot be restored. Contact administrator."

I already tried apt-get dist-upgrade in recovery, but it shows the error shown in the attached screenshot.
I tried several suggestions but am not able to upgrade it because of this error.
Basically I mistakenly pressed Ctrl+C during upgrading which I wasn't supposed to do.
Please help, because otherwise I will lose all my data.
| Broken my pop-OS while upgrading |