ldapadduser sets the user's primary group, which must be a single group. Use ldapaddusertogroup for secondary groups, e.g. ldapaddusertogroup sysuser2 wheel after the user has been created.
Usually ldapadduser accepts only a single group name:

# ldapadduser sysuser2 sysusers

Can I add this user to two groups while creating the user? If I try to run:

# ldapadduser sysuser2 sysusers,wheel
Warning : using command-line passwords, ldapscripts may not be safe
Cannot resolve group sysusers,wheel to gid : not found

I get an error. If not, can I modify the user after adding him, to make him a member of two groups? Using FreeBSD 9.1 and OpenLDAP 2.4.
LDAP: ldapadduser - can I add to two different groups?
Put those users into a group, then use a pam_access rule in /etc/security/access.conf to only allow logins if the user is in that group (and also allow root, any sysadmins, and monitoring accounts, if necessary), e.g.:

+ : root wheel nagios : ALL
+ : yourusergrouphere : ALL
- : ALL : ALL
I am trying to set up a couple of Linux workstations (RedHat 7), and I am trying to figure out how to set up authentication against an LDAP server with some unusual requirements. I basically know how to set up LDAP authentication using sssd, but I don't know how to restrict authentication to only certain users to meet my requirement. To enable LDAP configuration, I would use this command line:

authconfig --enableldap --enableldapauth --ldapserver="<redacted>" --ldapbasedn="<redacted>" --update --enablemkhomedir

This allows all LDAP users to log on, and as far as I know works just fine. However, my requirement is that only some users from LDAP can log in, and the list of users will be supplied in a separate text file (by user login name). More information: we have an LDAP server (Active Directory, actually) with a couple thousand users. Only about 20 of them, who have a need to work on these workstations, should be allowed to log on. Unfortunately, LDAP does not include any information related to this, and I do not have control of the LDAP server. Instead, every couple of weeks I get a text file with a list of the user names who should be allowed to log on. How can I set up authentication to use LDAP for user name/password/user ID etc. while also restricting it to only the users on this list?
Using openldap authentication only for some users
You've marked the attribute as operational (with USAGE directoryOperation), hence the error. Operational attributes are not supposed to be modifiable by users; they require code running within OpenLDAP to update them based on some sort of event. Also, I would recommend against altering the standard schemas, such as inetOrgPerson; you should create your own schema instead.
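A minimal sketch of how the attribute could instead be declared as an ordinary user attribute in a separate custom schema file. The OID 1.1.1.1.1 is a placeholder (a real deployment should use an OID from your own registered arc, not 2.5.18.*, which belongs to standard operational attributes), and the SUBSTR rule is dropped because the generalizedTime syntax has no substring matching:

```
attributetype ( 1.1.1.1.1 NAME 'dateOfExpire'
  DESC 'date of account expiry'
  EQUALITY generalizedTimeMatch
  ORDERING generalizedTimeOrderingMatch
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.24
  SINGLE-VALUE )
```

Without a USAGE clause the attribute defaults to userApplications, so clients may write it normally.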
I have a problem with creating my own attribute (e.g. dateOfExpire, generalized time), adding this attribute to my own objectClass (e.g. dormitory), and then adding both to the existing inetorgperson schema. This is what I added to the inetorgperson.ldif file:

olcAttributeTypes: ( 2.5.18.1 NAME 'dateOfExpire'
  DESC 'RFC4512: indicated the date of account expiry'
  EQUALITY generalizedTimeMatch
  ORDERING generalizedTimeOrderingMatch
  SINGLE-VALUE USAGE directoryOperation
  SUBSTR caseIgnoreSubstringsMatch
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.24 )

and this to the inetorgperson.schema file:

attributetype ( 2.5.18.1 NAME 'dateOfExpire'
  DESC 'RFC4512: indicated the date of account expiry'
  EQUALITY generalizedTimeMatch
  ORDERING generalizedTimeOrderingMatch
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.24
  SINGLE-VALUE USAGE directoryOperation )

objectclass ( 2.5.6.6.1 NAME 'dormitory'
  DESC 'RFC2256: a person'
  SUP person STRUCTURAL
  MUST ( sn $ cn $ dateOfExpire $ name $ uid )
  MAY ( userPassword $ telephoneNumber $ seeAlso $ description ) )

After that I added the schema with this command:

ldapadd -Y EXTERNAL -H ldapi:/// -D "cn=config" -f inetorgperson.ldif

But I got only this error:

SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
adding new entry "cn=inetorgperson,cn=schema,cn=config"
ldap_add: Other (e.g., implementation specific) error (80)
        additional info: olcAttributeTypes: "2.5.18.1" is operational
Problem with creating own attribute on openldap
Note that for a successful login two things have to work:

- the name service switch, configured in the file /etc/nsswitch.conf
- the PAM configuration, defined in the various files in the directory /etc/pam.d

Since getent seems to return correct data, your /etc/nsswitch.conf seems to be correct. Then I'd check the configuration in /etc/pam.d/common* (on RHEL/CentOS: /etc/pam.d/system-auth and password-auth) to see whether it uses the module pam_sss.so. And of course you should examine your logs.
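A quick way to check for the pam_sss.so module is to grep the PAM stack. Shown here against an inline sample file so the check itself is visible without touching a live system (on a real box, point grep at /etc/pam.d/common-auth on Debian/Ubuntu or /etc/pam.d/system-auth on RHEL/CentOS):

```shell
# Sample of what a pam_sss-enabled auth stack looks like
cat <<'EOF' > /tmp/sample-auth
auth        required      pam_env.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        sufficient    pam_sss.so use_first_pass
auth        required      pam_deny.so
EOF

# Count references to the sssd PAM module; 0 would mean sssd is never consulted
grep -c 'pam_sss\.so' /tmp/sample-auth    # prints 1
```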
I have configured a local LDAP server running CentOS 7, using this article: https://www.itzgeek.com/how-tos/linux/centos-how-tos/step-step-openldap-server-configuration-centos-7-rhel-7.html. Now my LDAP server is running without any issue. On the LDAP server the firewall is disabled; however, SELinux is enabled. I also migrated my local users to the LDAP DB using migrationtools, which was successful, and I installed and configured phpldapadmin, also successfully. Then I connected another server as an LDAP client (I installed sssd and krb5-workstation, and used authconfig-tui to connect), just for authentication. When I test my LDAP connectivity from the client server:

[root@ldapclient ~]# getent passwd user1
user1:*:1001:1001:user1:/home/user1:/bin/bash
[root@ldapclient ~]# id user1
uid=1001(user1) gid=1001 groups=1001
[root@ldapclient ~]# id testfromphpldapadmin
uid=1003(testfromphpldapadmin) gid=1010(ldapusers) groups=1010(ldapusers)

(testfromphpldapadmin was created using phpldapadmin; user1 was migrated using migrationtools.) Based on these results, I was thinking that my LDAP authentication would just work without any issue. But when I try to ssh with that LDAP user account:

login as: user1
user1@centclient's password:
Access denied
ldap users unable to ssh to the server
I believe you're just deleting the user with that command but not all their entries from the OU. It's my understanding that LDAP doesn't maintain linkages between disparate objects the way you're thinking; rather, you're expected to do ldapsearches first to produce lists of objects that you then act on using either ldapdelete or ldapmodify. We typically write the results from ldapsearch to .ldif files first and then act on them using ldapmodify or ldapdelete. You can, however, parse the output from ldapsearch and pipe it to ldapdelete, as shown in this example from the U&L Q&A titled: ldapdelete, want to remove all UID's of people OU, but preserve OU?

$ ldapsearch -ZZ -W -D 'cn=Manager,dc=site,dc=fake' \
    -b 'ou=people,dc=site,dc=fake' -s one dn | \
    grep dn: | cut -b 5- | \
    ldapdelete -ZZ -W -D 'cn=Manager,dc=site,dc=fake'

I believe you'll need to do something similar: find all the groups in which the user is a memberUid, pass that list to ldapmodify, and then run your ldapdelete command once the user has been removed from all groups. Incidentally, to remove a user from a group (note the target DN is the group's entry, not the manager account; cn=devgroup here is illustrative):

dn: cn=devgroup,ou=groups,dc=site,dc=fake
changetype: modify
delete: memberuid
memberuid: john

With respect to the .ldif files, the examples on the page titled "Managing Users with Lightweight Directory Access Protocol (LDAP)" are excellent. They show how to do all the basic operations with .ldif snippets, which can be expanded upon to do operations across multiple objects.
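The grep dn: | cut -b 5- step in the pipeline above just strips the leading "dn: " prefix (4 bytes) from each result line. Here is the same extraction run against canned ldapsearch output, so the text processing is visible without needing a directory server:

```shell
# Canned output resembling `ldapsearch ... -s one dn`
cat <<'EOF' > /tmp/search.out
dn: uid=john,ou=people,dc=site,dc=fake

dn: uid=jane,ou=people,dc=site,dc=fake
EOF

# Keep only the dn lines and drop the leading "dn: " (bytes 1-4)
grep dn: /tmp/search.out | cut -b 5-
# prints:
#   uid=john,ou=people,dc=site,dc=fake
#   uid=jane,ou=people,dc=site,dc=fake
```

On a live system, that output is what gets fed to ldapdelete on stdin, one DN per line.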
I'm currently managing OpenLDAP via the command line. I added the user john and the group devgroup, and I assigned john to the devgroup group. Then I deleted the user via the command line:

ldapdelete -Y EXTERNAL -H ldapi:/// -D "cn=admin,dc=example,dc=local" "uid=john,dc=example,dc=local"

The user is gone, but he is still listed as a member of the previously assigned group (devgroup). I noticed that the user and the group's member list have no linkage; basically, I can add any nonexistent user to the group. Is there a way I can link these two? Thanks!
OpenLDAP: Deleted user is still listed in the group
I fixed this warning by reindexing:

systemctl stop slapd
rm /var/lib/ldap/alock
slapindex
chown -R ldap:ldap /var/lib/ldap/
systemctl start slapd
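Reindexing only helps if the attribute is actually listed in the index directives. If the warning persists, a commonly suggested fix (a sketch, assuming a slapd.conf setup like the one in the question, with the memberof overlay already loaded) is to add memberOf to the index list and then re-run slapindex with slapd stopped:

```
index memberOf eq
```

After adding the line, repeat the stop/slapindex/chown/start sequence above so the new index is built.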
I have installed an OpenLDAP server with the memberof overlay on CentOS via slapd.conf. The relevant part of the config:

index objectClass eq,pres
index ou,cn,surname,givenname eq,pres,sub
index uidNumber,gidNumber,loginShell eq,pres
index uid,memberUid eq,pres,sub
index nisMapName,nisMapEntry eq,pres,sub

In the OpenLDAP logs:

SRCH attr=uid displayName mail member
Jun 21 15:53:52 rhsfugt001 slapd[26924]: <= bdb_equality_candidates: (memberOf) not indexed

I haven't found a solution to fix this...
openLDAP bdb_equality_candidates: (memberOf) not indexed
This was due to an incorrect group ID assignment. On the freshly installed system, the group ID 501 had been arbitrarily assigned to another group, while on all the remaining machines in the lab the group ID 501 belongs to vboxusers. That is why the LDAP users were unable to access VirtualBox on that particular machine.
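Mismatches like this can be spotted by comparing, on each machine, which group name a numeric GID resolves to and which groups a user actually ends up in. A sketch (root and GID 0 are used here as stand-ins; substitute the LDAP user and GID 501):

```shell
# Which group name owns a given numeric GID on this machine?
# The name should be identical on every host; if not, the GIDs have drifted.
getent group 0 | cut -d: -f1

# Which groups does a user actually resolve to?
id -nG root
```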
We have LDAP and NFS set up in the lab. The lab has 16 machines and a server. All LDAP users' home directories are on the server: whenever an LDAP user logs in from any of the 16 machines, his home is presented from the server on the client machine through NFS automounting. On all the client machines we have installed VirtualBox and created a group vboxusers containing all the LDAP users, so whenever an LDAP user logs in on any client machine and runs VirtualBox, he is able to use it. However, on one machine, after a fresh installation of RHEL and VirtualBox, when I run VirtualBox as an LDAP user I get the "guest OS inaccessible" error. I thought it might be a permission issue, so I reset the permissions for vboxusers on that machine. On further investigation, we found that the LDAP users are actually not assigned to the group "vboxusers" but to some other group. How is this possible, given that I copied the LDAP and NFS configuration files from the working machines in the lab to the newly installed machine?

EDIT: ldap.conf contents

#
# LDAP Defaults
#
# See ldap.conf(5) for details
# This file should be world readable but not world writable.
#BASE   dc=example, dc=com
#URI    ldap://ldap.example.com ldap://ldap-master.example.com:666
#SIZELIMIT      12
#TIMELIMIT      15
#DEREF          never
URI ldap://192.168.1.10/
BASE dc=xxx,dc=xxx
#TLS_CACERTDIR /etc/openldap/cacerts
LDAP user not present in the desired group
Okay, got it working. I started all over on a new VM and replaced libnss-ldap with libnss-ldapd (note the trailing d), as suggested in the comments on the instructions above, and selected only passwd, group, and shadow during the configuration step, when the installer asks which services to configure.
I have installed OpenLDAP on an Ubuntu 20.04 Server using https://www.techrepublic.com/article/how-to-install-openldap-and-phpldapadmin-on-ubuntu-server-20-04/ and set up an Ubuntu 20.04 Desktop as a client using https://computingforgeeks.com/how-to-configure-ubuntu-as-ldap-client/. Login of the local user "client" works, and so does the LDAP user "evhalen". However, sometimes when I open a terminal on the client it shows the other user in the prompt: logging in on the GUI as "client", the terminal shows "evhalen" as the user, and whoami confirms this. I have made 2 local users on the Ubuntu workstation; logging off and switching between the local users a few times causes no problem. The problem only occurs once I log in with the user I made on the LDAP server: it then shows the correct name on the home folder, but in the terminal it shows the previous user. Also, if the screen locks, it shows the user from the terminal prompt instead of the user I logged in with (and accepts the password of the user I logged in as). So basically: log in as user 1, the terminal shows user 2 and the lock screen shows user 2, and the screen unlocks with the password of user 1. A reboot fixes it until I log in with the LDAP account again. Everything is installed in my homelab running VMs on ESXi 6.5. How can I fix this?
LDAP client mixing up credentials
Interesting: order does matter in PAM. It works if pam_unix comes before pam_sss:

auth     sufficient pam_unix.so try_first_pass nullok
auth     sufficient pam_sss.so use_first_pass

password sufficient pam_unix.so try_first_pass use_authtok nullok sha512 shadow
password sufficient pam_sss.so use_authtok
I'm trying to set up sudo-ldap in a clean CentOS 7 Docker environment. I've successfully set up sssd and PAM authentication, and it works. However, sudo-ldap works only if !authenticate is set:

dn: cn=test,ou=SUDOers,ou=People,dc=srv,dc=world
objectClass: top
objectClass: sudoRole
cn: test
sudoUser: test
sudoHost: ALL
sudoRunAsUser: ALL
sudoCommand: ALL
sudoCommand: !/bin/cp
sudoOption: !authenticate

When I run sudo cp, I get the following debug logs:

# without !authenticate
sudo: searching LDAP for sudoers entries
sudo: ldap sudoRunAsUser 'ALL' ... MATCH!
sudo: ldap sudoCommand 'ALL' ... MATCH!
sudo: ldap sudoCommand '!/bin/cp' ... MATCH!
sudo: Command allowed
sudo: LDAP entry: 0x55ed4d71b930
sudo: done with LDAP searches
sudo: user_matches=true
sudo: host_matches=true
sudo: sudo_ldap_lookup(0)=0x02
[sudo] password for test:
Sorry, try again.

# with !authenticate
sudo: searching LDAP for sudoers entries
sudo: ldap sudoRunAsUser 'ALL' ... MATCH!
sudo: ldap sudoCommand 'ALL' ... MATCH!
sudo: Command allowed
sudo: LDAP entry: 0x564d56cb9960
sudo: done with LDAP searches
sudo: user_matches=true
sudo: host_matches=true
sudo: sudo_ldap_lookup(0)=0x02
sudo: removing reusable search result
cp: missing file operand
Try 'cp --help' for more information.

I can use the password to log in via SSH, but I am not able to run the sudo command. Does anyone know what's wrong? Attached is /etc/pam.d/system-auth (sudo includes that file):

#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth        sufficient    pam_sss.so use_first_pass
auth        sufficient    pam_unix.so try_first_pass nullok
auth        required      pam_deny.so

account     required      pam_unix.so

password    requisite     pam_pwquality.so try_first_pass local_users_only retry=3 authtok_type=
password    sufficient    pam_sss.so use_authtok
password    sufficient    pam_unix.so try_first_pass use_authtok nullok sha512 shadow
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
-session    optional      pam_systemd.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so
session     optional      pam_sss.so
session     required      pam_mkhomedir.so skel=/etc/skel umask=0022
sudo-ldap works with !authenticate only
olcAccess: {0}to dn.children="ou=Persons,dc=example,dc=com"
  attrs=entry,uid,cn,userPassword,mail
  by dn="cn=gitlab,ou=Service Accounts,dc=example,dc=com" tls_ssf=128 read
  by * none break
olcAccess: {1}to dn.subtree="ou=Persons,dc=example,dc=com"
  by dn="cn=gitlab,ou=Service Accounts,dc=example,dc=com" tls_ssf=128 search
  by * none break

Michael Ströder was close. I needed an extra search privilege. Additionally, the break was necessary: apparently otherwise the evaluation already stops before the tree can be iterated down to the correct entries. The tls_ssf=128 is not necessary for this problem, but adds an additional level of security.
I have the following query against my directory:

ldapsearch -x -H ldaps://example.com -D "cn=gitlab,ou=Service Accounts,dc=example,dc=com" -w foobar -b "ou=Persons,dc=example,dc=com"

With the following olcAccess I get the results below:

dn: olcDatabase={1}mdb,cn=config
olcAccess: {0}to dn.subtree="ou=Persons,dc=example,dc=com" by dn="cn=gitlab,ou=Service Accounts,dc=example,dc=com" read
olcAccess: {1}to attrs=userPassword,shadowLastChange by self =xw by anonymous auth
olcAccess: {2}to * by self read by * none

(Rule 1 should be first, and it also works like that, but to be sure I put it further down for now.) Result:

# Persons, example.com
dn: ou=Persons,dc=example,dc=com
objectClass: organizationalUnit
objectClass: top
ou: Persons

# Hans Wurst, Persons, example.com
dn: cn=Hans Wurst,ou=Persons,dc=example,dc=com
givenName: Hans
sn: Wurst
cn: Hans Wurst
uid: hwurst
userPassword:: <PASSWORDHASH>
uidNumber: 1001
gidNumber: 500
homeDirectory: /home/hwurst
loginShell: /bin/bash
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: top

# Carla Kaese, Persons, example.com
dn: cn=Carla Kaese,ou=Persons,dc=example,dc=com
gidNumber: 500
givenName: Carla
homeDirectory: /home/ckaese
loginShell: /bin/bash
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: top
sn: Kaese
uid: ckaese
uidNumber: 1000
cn: Carla Kaese
userPassword:: <PASSWORDHASH>

Now my goal is to restrict the read access to only some attributes, so I change the ACLs as follows:

dn: olcDatabase={1}mdb,cn=config
olcAccess: {0}to dn.subtree="ou=Persons,dc=example,dc=com" attrs="entry,uid,cn" by dn="cn=gitlab,ou=Service Accounts,dc=example,dc=com" read
olcAccess: {1}to attrs=userPassword,shadowLastChange by self =xw by anonymous auth
olcAccess: {2}to * by self read by * none

I added attrs="entry,uid,cn". However, the same search now just returns:

# search result
search: 2
result: 0 Success

What am I doing wrong? What am I missing? How can it work?
Restricting openldap ldapsearch on attributes
Based on https://github.com/openshift/openldap/blob/master/2.4.41/contrib/config/schema/nis.schema (among other references), which says:

Depends upon core.schema and cosine.schema

you'll need to include those before including nis.schema:

include /opt/local/etc/openldap/schema/core.schema
include /opt/local/etc/openldap/schema/cosine.schema
# ...
include /opt/local/etc/openldap/schema/nis.schema
I have a Mac on which I have installed OpenLDAP (using MacPorts). I have gotten the system up and am able to create objects. The only schema I have configured in slapd.conf is core.schema. I am looking to add nis.schema, but when I try this, the slapd -d3 command won't work for me. Specifically, it says:

5b994529 @(#) $OpenLDAP: slapd 2.4.28 (Oct 14 2016 16:25:43) $
	[emailprotected]:/Library/Caches/com.apple.xbs/Binaries/OpenLDAP/OpenLDAP-523.30.2~39/TempContent/Objects/servers/slapd
5b994529 daemon: SLAP_SOCK_INIT: dtblsize=256
5b994529 daemon_init: listen on ldap:///
5b994529 daemon_init: 1 listeners to open...
ldap_url_parse_ext(ldap:///)
5b994529 daemon: listener initialized ldap:///
5b994529 daemon_init: 2 listeners opened
5b994529 daemon_init: [0]DNSServiceRegister
ldap_create
5b994529 slapd init: initiated server.
5b994529 int pws_auxprop_init(const sasl_utils_t *, int, int *, sasl_auxprop_plug_t **, const char *): entered
5b994529 slap_sasl_init: initialized!
5b994529 bdb_back_initialize: initialize BDB backend
5b994529 bdb_back_initialize: Berkeley DB 4.7.25: (May 15, 2008)
5b994529 hdb_back_initialize: initialize HDB backend
5b994529 hdb_back_initialize: Berkeley DB 4.7.25: (May 15, 2008)
5b994529 ==> OD Locales overlay initialize called
5b994529 ==> translucent_initialize
5b994529 slapd destroy: freeing system resources.
5b994529 slapd stopped.
5b994529 connections_destroy: nothing to destroy.
tlsst_destroy()

I'm unable to locate any logs for this to narrow down the cause. How can I import this schema and still have slapd run successfully?

Edit: I have run slapd -d-1 for additional logging. I can provide the full log if needed, but I am seeing the following as the likely culprit:

5b9a54a1 /opt/local/etc/openldap/schema/nis.schema: line 203 (objectclass ( 1.3.6.1.1.1.2.6 NAME 'ipHost' DESC 'Abstraction of a host, an IP device' SUP top AUXILIARY MUST ( cn $ ipHostNumber ) MAY ( l $ description $ manager ) ))
5b9a54a1 /opt/local/etc/openldap/schema/nis.schema: line 203 objectclass: AttributeType not found: "manager"

Is this a dependency I am missing?
openLDAP won't start after including second schema
The account cn=Directory Manager is created at installation time and used to run the rest of the setup. (I forget the details, but OpenDJ allows having several such admin entries.) The point is that those admin entries are not subject to any access control or constraints. In particular, such an account can mess up the database backend configuration in cn=config. It is the perfect account to use to shoot yourself in the foot. So I agree with your LDAP admin: don't do that. Use a personal admin account which is properly authorized by OpenDJ's ACI settings. This also gives you better information in logs and operational attributes about who did what.
I recently joined an organization and got privileges to add/remove entries and attributes in LDAP (OpenDJ, an open-source Linux-based LDAP). So far I have made thousands of modifications and added attributes with no issues. But something awkward happened: soon after I added an attribute, I saw that one of its values (an IP address) contained jumbled characters. I removed it instantly and corrected it using the Directory Manager credentials. My LDAP admin called me and told me not to use Directory Manager access to make such changes, because it can corrupt the LDAP database. I was not convinced, and when I asked how, I did not get an answer. Is it true that changing a value for a special group, and especially an attribute, can damage the entire LDAP? Any explanation will be helpful.
OpenDJ, an open-source LDAP database: handling and issues
AFAIK, it's not possible. You can preseed a pre-encrypted password for the root and first user accounts, and you can even do it for the grub password (and a few others too), e.g.:

d-i passwd/root-password-crypted password [MD5 hash]
d-i passwd/user-password-crypted password [MD5 hash]
d-i grub-installer/password-crypted password [MD5 hash]

But that won't work for ldap-auth-config/rootbindpw, because you need the unencrypted password in your LDAP config to connect to the LDAP server. The only thing I can suggest is to use a dummy password in the preseed, and script an ssh connection TO the freshly built machine to set the real rootbindpw. This has to be a 'push' operation rather than a 'pull'; otherwise you're just shifting the problem from the preseed to somewhere else.
I am PXE-installing Ubuntu over a network, unattended. I want LDAP installed as well, but I need to provide the LDAP DB root password in the seed:

ldap-auth-config ldap-auth-config/rootbindpw password

How can I keep this secure? I don't want to provide the plain-text password on this line.
How to provide password in a secure way to LDAP seed?
To have support for LDAP in FreeRADIUS, install the corresponding package with the command:

sudo apt-get install freeradius-ldap

Also, regarding your doubts about mixed versions, to check the installed versions do:

dpkg -l | grep freeradius

and/or:

dpkg -l freeradius-ldap
Initially I installed FreeRADIUS from the stable branch as follows:

apt-get install python-software-properties
apt-add-repository ppa:freeradius/stable-3.0
apt-get update
apt-get install freeradius make

I thought that all modules were also installed; but now, when I need FreeRADIUS to authenticate against an LDAP directory and I try to reconfigure it and run it in debugging mode (-X), I see the following error:

/etc/freeradius/mods-enabled/ldap: Failed to link to module 'rlm_ldap': /usr/lib/freeradius/rlm_ldap.so: cannot open shared object file: No such file or directory

That's why I believe the LDAP module for FreeRADIUS was not installed. How can I install it from the same PPA repository branch, in order not to damage FreeRADIUS and get both (FreeRADIUS with the LDAP module) working? Ubuntu Server 16.04.1 LTS, FreeRADIUS 3.0.11.

Update 1:

$ dpkg -l | grep freeradius
freeradius          3.0.11-ppa3~xenial
freeradius-common   3.0.11-ppa3~xenial
freeradius-config   3.0.11-ppa3~xenial
freeradius-utils    3.0.11-ppa3~xenial
libfreeradius3      3.0.11-ppa3~xenial
Installing Freeradius-LDAP 3.x from PPA - Repository
With multi-master replication in place, modifying the problematic domain entry by just adding a simple description on the working server seemed to solve the issue. The entry got replicated to the troublesome server automatically.

dn: dc=example,dc=com
changetype: modify
add: description
description: example
-

Now it works as expected. I don't know what may have caused the object to disappear, though.
I have a couple of LDAP servers, redundant with replication enabled. I'm having trouble with Apache Directory Studio not being able to fetch the base DN of one of these LDAP servers; it shows an empty Root DSE. For the other server, however, it shows the whole DIT without problems. I found that the root node of my tree is missing on the problematic server when I perform an ldapsearch:

SERVER-1# ldapsearch -D "cn=manager,dc=example,dc=com" -w pass -LL -b "dc=example,dc=com" -s base
version: 1

dn: dc=example,dc=com
dc: example
objectClass: top
objectClass: domain

SERVER-2# ldapsearch -D "cn=manager,dc=example,dc=com" -w pass -LL -b "dc=example,dc=com" -s base
version: 1

If I try to add the missing entry, I get an error, because it does exist:

# ldapadd -vc -D "cn=manager,dc=example,dc=com" -w pass < domain.ldif
ldap_initialize( <DEFAULT> )
add dc: example
add objectClass: top domain
adding new entry "dc=example,dc=com"
ldap_add: Already exists (68)

If it does exist, how come it doesn't show up in ldapsearch? I don't have any ACLs configured.
Existing LDAP object not showing in ldapsearch
You did not specify

changetype: modify

and

replace: givenName

It should have been:

sudo ldapmodifyuser 9928892
# About to modify the following entry :
dn: uid=9928892,ou=Users,dc=thisplace,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: 9928892
sn: FUJI
givenName: GABUTO
cn: GABUTO FUJISHITA
displayName: GABUTO FUJISHITA
uidNumber: 18055
gidNumber: 5000
gecos: GABUTO FUJISHITA
loginShell: /bin/bash
homeDirectory: /home/9928892
userPassword:: e2NyeXB0fS...
shadowLastChange: 17575

# Enter your modifications here, end with CTRL-D.
dn: uid=9928892,ou=Users,dc=thisplace,dc=com
changetype: modify
replace: givenName
givenName: GAKUTO
# Ctrl+D
I can't find an example of how to use the ldapscripts command ldapmodifyuser, and I'm not familiar enough with ldapmodify to figure it out. For example, how can I use ldapmodifyuser to change a user's givenName? Here's my attempt:

~$ sudo ldapmodifyuser 9928892
# About to modify the following entry :
dn: uid=9928892,ou=Users,dc=thisplace,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: 9928892
sn: FUJI
givenName: GABUTO
cn: GABUTO FUJISHITA
displayName: GABUTO FUJISHITA
uidNumber: 18055
gidNumber: 5000
gecos: GABUTO FUJISHITA
loginShell: /bin/bash
homeDirectory: /home/9928892
userPassword:: e2NyeXB0fS...
shadowLastChange: 17575

# Enter your modifications here, end with CTRL-D.
dn: uid=9928892,ou=Users,dc=thisplace,dc=com
givenName:GAKUTO
# Ctrl+D
Error modifying user entry uid=9928892,ou=Users,dc=thisplace,dc=com in LDAP
how to use ldapmodifyuser from ldapscripts to change a value
On the login page of LDAP Account Manager (online demo here), click on the LAM configuration link in the top right corner. Then choose Edit server profiles, followed by Manage server profiles. There you can add/rename/delete existing profiles and set a default profile.
I have installed OpenLDAP on Debian 11 as well as LDAP Account Manager 8.2, which seems to work well, but I have a question: On my login page, it says No default profile set. Please set it in the server profile configuration. - but I can't find where to do that. Where do I set this? I have tried googling it, but it just comes up with irrelevant responses.
LDAP Account Manager: Default profile?
You don't need anything other than a normal user account to query Active Directory through LDAP. (Some fields will be inaccessible, but the majority of them are relatively public.)

authacct='[emailprotected]'          # Authentication (your AD account)
ldapuri='ldap://ad.contoso.com'      # Address of any AD server
searchbase='dc=contoso,dc=com'       # Starting point for searches
what='frankfalse'                    # What text to search for

# LDAP query filter
filter="(&(objectclass=user)(|(cn=$what*)(userPrincipalName=$what*)(mail=$what*)(proxyAddresses=smtp:$what*)(sn=$what*)(sAMAccountName=$what*)(physicalDeliveryOfficeName=*$what*)(c=$what)))"

# Fields to return
fields=(sAMAccountName cn mail c targetAddress description title)

# Perform the search
ldapsearch -W -L -x -H "$ldapuri" -D "$authacct" -b "$searchbase" -s sub "$filter" "${fields[@]}"

What you cannot do with ordinary non-privileged access is differentiate between user accounts that are disabled and those that are active. If you have a suitably privileged account, you can add this into the filter immediately after (objectclass=user) to select only active accounts:

(!(userAccountControl:1.2.840.113556.1.4.803:=2))

The $filter string directly and naively interpolates the $what value. Do not blindly accept $what from an untrusted user.
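If $what can come from an untrusted source, the special filter characters should be escaped per RFC 4515 before interpolation. A minimal sketch; the function name ldap_escape is my own, not a standard utility, and NUL bytes are not handled here:

```shell
# Escape the RFC 4515 special characters \ * ( ) for safe use in a filter.
# Backslash must be handled first, so the escapes themselves are not re-escaped.
ldap_escape() {
    printf '%s' "$1" | sed -e 's/\\/\\5c/g' \
                           -e 's/\*/\\2a/g' \
                           -e 's/(/\\28/g' \
                           -e 's/)/\\29/g'
}

what=$(ldap_escape 'frank*false(test)')
printf '%s\n' "(&(objectclass=user)(cn=$what))"
# prints (&(objectclass=user)(cn=frank\2afalse\28test\29))
```

With this in place, a hostile value like *)(uid=*) can no longer change the structure of the filter.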
I need to practice with LDAP, so I think it is a good idea to install an LDAP server just for some tests. On the client side I'm using a Linux Mint distribution, and I have installed all the software packages as described in this link. An Active Directory service is available in my company, but my user obviously does not have the administrator privileges needed for the following command:

sudo realm join domain.tld -U domain_administrator --verbose

So I was thinking of installing an LDAP server for tests. Does someone know an LDAP server suitable for my purposes? Thanks.

At this link I found a tutorial to install an OpenLDAP server and OpenLDAP client on Linux Mint. The installation of the OpenLDAP server is possible, as always, via apt:

sudo apt update
sudo apt -y install slapd ldap-utils

After that the server must be configured, and the link gives some details. Obviously this article is only an introduction, but it is enough to start studying an LDAP server.
What LDAP server can I install only for test LDAP authentication?
Create an LDIF for each object like this:

dn: uid=xy,ou=group1,ou=People,dc=company,dc=com
loginShell: /bin/bash
objectClass: account
objectClass: posixAccount
objectClass: top
objectClass: shadowAccount
userPassword:: ...pwhash.....
cn: Your Name
gecos: The Gecos field (infos),,,
gidNumber: 100
uid: xy
homeDirectory: /home/xy
uidNumber: 1040

Then add it to the LDAP server like this:

ldapadd -v -U admin@fs -h 127.0.0.1 -a -f /path/to/ldiffile.ldif

You might have to play with the authentication of your initial admin user (maybe use a DN to auth; it depends on your initial setup). HTH, derjohn
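The question also asks about creating the groups themselves. A hedged sketch of the matching LDIF for an ou container and a POSIX group, using the DIT from the question (the cn, gidNumber, and memberUid values are placeholders):

```
dn: ou=group1,dc=company,dc=com
objectClass: organizationalUnit
ou: group1

dn: cn=developers,ou=group1,dc=company,dc=com
objectClass: posixGroup
objectClass: top
cn: developers
gidNumber: 100
memberUid: xy
```

Loaded with the same ldapadd invocation as above; the user's gidNumber should match the gidNumber of the group intended as the primary group.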
I'm in Docker, in a VM (Ubuntu Server). I have created an OpenLDAP server. I want to know, simply: how do I create groups and users? My DIT tree:

dc=company,dc=com
  ou=group1
  ou=group2
  ou=group3
  ou=group4

and in each group I have many users:

cn=user1
cn=user2
etc...

Thank you
How I can create accounts with command lines for my OpenLDAP server?
It is also required to change the following line in the file /etc/openldap/slapd.d/cn=config/olcDatabase={0}config.ldif:

olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" manage by dn.base="cn=admin,dc=mydomain,dc=com" manage by * none

Here, manage by dn.base="cn=admin,dc=mydomain,dc=com" has been added.
I installed OpenLDAP with this command:

# yum -y install openldap openldap-clients openldap-servers

Copied the reference data structures:

# cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG

Generated a password hash for 'test' with:

# slappasswd

In the file /etc/openldap/slapd.d/cn=config/olcDatabase={2}hdb.ldif I added:

olcRootPW: {SSHA}5lPFVw19zeh7LT53hQH69znzj8TuBrLv
olcSuffix: dc=mydomain,dc=com
olcRootDN: cn=admin,dc=mydomain,dc=com

In the file /etc/openldap/slapd.d/cn=config/olcDatabase={1}monitor.ldif I added:

olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" read by dn.base="cn=admin,dc=mydomain,dc=com" read by * none

In the file /etc/openldap/slapd.d/cn=config/olcDatabase={0}config.ldif I added:

olcRootDN: cn=admin,dc=mydomain,dc=com

After all those steps I started the service with sudo service slapd start. Now I want to add a basic schema:

# ldapadd -f /etc/openldap/schema/core.ldif -D cn=admin,dc=mydomain,dc=com -w test

And at this step I get an error:

adding new entry "cn=core,cn=schema,cn=config"
ldap_add: Insufficient access (50)

Why do I actually get this error if I use olcRootDN?
CentOS 7: ldap_add: Insufficient access (50)
For some reason /etc/nslcd.conf was not created during installation. I copied it from another Ubuntu 10 server which had a working LDAP setup, but nslcd wouldn't start because of the line:

nss_initgroups_ignoreusers ALLLOCAL

Which is odd, because it's on the other Ubuntu 10 server, which I set up as well with the same configs. Regardless, I commented it out and it's working now!
So I have set up LDAP login on every server at my work successfully except one. Of course there has to be that one! And I want to close my Jira ticket, but I can't figure out what the issue is. The system is an Ubuntu 10 x32. Here is the output of auth.log:

Oct 29 10:56:33 localhost sshd[2560]: Invalid user LDAPUSERNAME from 10.1.11.224
Oct 29 10:56:33 localhost sshd[2560]: Failed none for invalid user LDAPUSERNAME from 10.1.11.224 port 51830 ssh2
Oct 29 10:56:36 localhost sshd[2560]: pam_unix(sshd:auth): check pass; user unknown
Oct 29 10:56:36 localhost sshd[2560]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.1.11.224
Oct 29 10:56:36 localhost sshd[2560]: pam_ldap: error trying to bind as user "uid=LDAPUSERNAME,ou=People,dc=DOMAIN,dc=com" (Invalid credentials)
Oct 29 10:56:38 localhost sshd[2560]: Failed password for invalid user LDAPUSERNAME from 10.1.11.224 port 51830 ssh2

UPDATE: This is a successful login on another server and the output of auth.log:

Oct 29 11:23:56 daily sshd[20625]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.1.11.224 user=LDAPUSERNAME
Oct 29 11:23:56 daily sshd[20625]: Accepted password for LDAPUSERNAME from 10.1.11.224 port 52211 ssh2
Oct 29 11:23:56 daily sshd[20625]: pam_unix(sshd:session): session opened for user LDAPUSERNAME by (uid=0)
Ldap SSH Login not working - Same configs worked on 20+ other servers - Ubuntu
The recipe meta-openembedded/meta-oe/recipes-support/openldap_2.4.50.bb present in my Yocto base system build contains many packages, not only the package openldap. One of the others is the package openldap-bin, and this is the package which adds ldapsearch to the image. So I have changed the assignment to IMAGE_INSTALL as below:

IMAGE_INSTALL += "openldap openldap-bin"

With this modification my Linux distribution contains ldapsearch (and the other utilities).
I have successfully added the recipe openldap to my Yocto-based Linux distribution with the instruction:

IMAGE_INSTALL += "openldap"

After that I've created a path/to/my-layer/recipes-support/openldap/openldap_%.bbappend file and put in it the instruction:

INSANE_SKIP_${PN} += "already-stripped"

The previous setting specifies to the Quality Assurance (QA) checks what to skip, and in this case (see the Yocto manual about insane.bbclass) we ask to skip:

already-stripped: Checks that produced binaries have not already been stripped prior to the build system extracting debug symbols. It is common for upstream software projects to default to stripping debug symbols for output binaries. In order for debugging to work on the target using -dbg packages, this stripping must be disabled.

Without the previous instruction the compilation of openldap fails with this error:

ERROR: openldap-2.4.50-r0 do_package: QA Issue: File '/usr/bin/ldapcompare' from openldap was already stripped, this will prevent future debugging! [already-stripped]
ERROR: openldap-2.4.50-r0 do_package: QA Issue: File '/usr/bin/ldapdelete' from openldap was already stripped, this will prevent future debugging! [already-stripped]
ERROR: openldap-2.4.50-r0 do_package: QA Issue: File '/usr/bin/ldapexop' from openldap was already stripped, this will prevent future debugging! [already-stripped]
ERROR: openldap-2.4.50-r0 do_package: QA Issue: File '/usr/bin/ldapmodify' from openldap was already stripped, this will prevent future debugging! [already-stripped]
ERROR: openldap-2.4.50-r0 do_package: QA Issue: File '/usr/bin/ldapmodrdn' from openldap was already stripped, this will prevent future debugging! [already-stripped]
ERROR: openldap-2.4.50-r0 do_package: QA Issue: File '/usr/bin/ldappasswd' from openldap was already stripped, this will prevent future debugging! [already-stripped]
ERROR: openldap-2.4.50-r0 do_package: QA Issue: File '/usr/bin/ldapsearch' from openldap was already stripped, this will prevent future debugging! [already-stripped]
ERROR: openldap-2.4.50-r0 do_package: QA Issue: File '/usr/bin/ldapurl' from openldap was already stripped, this will prevent future debugging! [already-stripped]
ERROR: openldap-2.4.50-r0 do_package: QA Issue: File '/usr/bin/ldapwhoami' from openldap was already stripped, this will prevent future debugging! [already-stripped]
ERROR: openldap-2.4.50-r0 do_package: QA Issue: File '/usr/sbin/slapd' from openldap was already stripped, this will prevent future debugging! [already-stripped]
ERROR: openldap-2.4.50-r0 do_package: Fatal QA errors found, failing task.

The compilation process produces the binary for the utility ldapsearch, but this binary isn't installed into the image, while openldap itself is correctly installed into the distribution. I can't find any way to add ldapsearch (and the other correctly compiled utilities) to the image. Could someone help me?
How to add utility ldapsearch to yocto image?
Okay, found a solution: an entry of type meta with several subentries of type ldap using suffixmassaging.
I have configured OpenLDAP 2.4.23 as a proxy to multiple separate Active Directories; it works fine when each AD has a different suffix/search base. I have a use case to fulfill: one application server is only able to check ONE LDAP server, and it allows only ONE search base to be checked, but the users are spread across several separate Active Directories. So I would like to configure OpenLDAP (or any other free directory) to work as a proxy that tries each Active Directory to find out whether the user exists there, and if not, checks the next Active Directory. I have a unique key for each user (so no problem with duplicates). I have already tried:

- setting multiple databases with the same suffix, but OpenLDAP is not happy with that config
- using meta (but the search base cannot be the same for two entries)

If you have any idea how to proceed I would be grateful.
LDAP : one suffix : search multiple separate Active Directory
I didn't post this until I had searched for days, and I just now found the answer. If no one else finds this useful, I'll end up deleting, but here it is: https://forums.freebsd.org/threads/58365/

Basically, if networking isn't up yet, then slapd cannot bind and will fail. The solution is to edit /usr/local/etc/rc.d/slapd and change this line:

# REQUIRE: FILESYSTEMS ldconfig

To:

# REQUIRE: FILESYSTEMS ldconfig NETWORKING

This ensures networking is loaded prior to attempting to start slapd.
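The edit itself can be scripted; the sketch below runs sed against a throwaway copy of the relevant line rather than the real file (on a real FreeBSD host, point it at /usr/local/etc/rc.d/slapd and note that BSD sed wants `-i ''` instead of bare `-i`):

```shell
# Create a demo copy of the rc.d header line (hypothetical stand-in file)
f=$(mktemp)
printf '%s\n' '# REQUIRE: FILESYSTEMS ldconfig' > "$f"

# Append NETWORKING so rcorder schedules slapd after networking is up
sed -i 's/^# REQUIRE: FILESYSTEMS ldconfig$/& NETWORKING/' "$f"

cat "$f"   # -> # REQUIRE: FILESYSTEMS ldconfig NETWORKING
```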
I can successfully start slapd on FreeBSD 11 perfectly fine, but it won't run on startup. Here is what I put in my rc.conf:

slapd_enable="YES"
slapd_flags="-h "ldap://1.2.3.4/ ldapi://%2fvar%2frun%2fopenldap%2fldapi/""
slapd_sockets="/var/run/openldap/ldapi"

1.2.3.4 is replaced with my actual public IP. I have tried many permutations of the valid options for slapd_flags and slapd_sockets, but every time I reboot, slapd is not running. How do I ensure slapd runs at system startup?
slapd doesn't start automatically despite rc.conf entry
In general, you should use the distribution's package manager (apt) to install software, and proper configuration management tools (eg, Ansible/Chef/Puppet as mentioned in a comment, or Debian's local debconf) to propagate site-specific information and files. Copying files in bulk is not a good approach. If you have recently configured one machine by hand and want to replicate that setup on other machines, the blueprint tool might be just what you need. It will identify a list of packages installed (using apt-get in the case of Debian), configuration files (eg, in /etc/), and even any binaries compiled and installed manually in /usr/local/. This tool can be used to create chef and puppet scripts automatically; otherwise you will need to encode all the installation and configuration steps you already did by hand. It may be too late in your case, but in the future you might want to install the etckeeper package for debian, which will automatically version control the contents of /etc/, which makes identifying and documenting your changes (including the installation/upgrade/removal) of packages much easier. If you will be installing Debian from scratch on more than a couple machines, you might want to look in to tools for debian "pre-seeding", or custom image generation tools like FAI or Debian Live. See also answers to "automated linux deployment and config management at small scale - is it worth it?".
I have several machines running Debian that I'm configuring to work with Kerberos and LDAP. I thought I would automate this using rsync. At first I tried a basic rsync clone excluding directories and files such as /run, /sys, /etc/fstab, /etc/hosts, etc. That failed: somehow, some file specifying a UUID got copied (and given the files I omitted, I'm unsure what it could have been). So I decided on a more refined approach, using find -mmin -90 to locate all the files altered within the last 90 minutes (excluding /proc, /dev, etc.). However, that too failed, with a UUID transfer I can't explain.
How to automate (copy) LDAP/Kereberos install
You need to import the schema for inetOrgPerson into slapd. I have no idea about OpenLDAP installation on CentOS 7, but if you have a file /etc/ldap/schema/inetorgperson.ldif and dynamic slapd configuration (/etc/ldap/slapd.d/), it might accept the following command (run as root). ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/ldap/schema/inetorgperson.ldif
I keep getting an invalid syntax error when trying to create a user in OpenLDAP (CentOS 7). This is a new install of OpenLDAP for testing purposes. So far I've managed to create a group called "Lab Staff", and now I'm trying to add a user to it. Here is the LDIF file:

dn: uid=lsuarez,ou=Lab Staff,dc=sftest,dc=net
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
cn: Luis Suarez
sn: Suarez
givenname: Luis
uid: lsuarez
ou: Lab Staff

Then I try to add it as follows:

ldapadd -x -D "cn=Admin,dc=sftest,dc=net" -W -f /tmp/data.ldif
Enter LDAP Password:
adding new entry "uid=lsuarez,ou=Lab Staff,dc=sftest,dc=net"

I get the error message:

ldap_add: Invalid syntax (21)
additional info: objectclass: value #3 invalid per syntax

That looks to me like it doesn't like inetOrgPerson, but I have no idea what I'm doing wrong.
OpenLDAP: Invalid syntax error when trying to add LDIF
I think you cannot implement individual overrides with the map directive in nslcd.conf(5). Such a mapping is applied to the whole passwd map. However, depending on the order of module names in /etc/nsswitch.conf, you could set a local passwd entry with a different home directory, which then takes precedence, in the file /etc/passwd. Example line in /etc/nsswitch.conf:

passwd: files ldap

Make sure to keep the rest of the attributes consistent! IIRC newer versions of sssd have a CLI tool, sss_override(8), which allows you to set individual values for certain users. But ask yourself: do you really want to maintain that mess? If I were you, I'd first talk to the users: they should rather use the env var $HOME in their scripts instead.
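For example, with passwd: files ldap, a single local line in /etc/passwd shadows the LDAP entry for that one user; every field below is a placeholder and must match the LDAP entry exactly, except for the home directory being overridden:

```
alice:x:1040:100:Alice Example:/home/alice-old:/bin/bash
```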
I'm using openldap + nslcd to connect to an LDAP server for authentication of some users (these users want their passwords and most of their configuration shared over many devices). I don't control the LDAP server. However, the synchronization came after the users already had dual accounts, so the names of the home folders don't match (and it's not wise to move them due to possible hardcoded paths in their scripts). I'm considering a hard-linked directory, but I want to know if there is a way to override the home folder for a specific user, which seems cleaner and sounds like it should be a common use case. I was unpleasantly surprised that nslcd.conf seems to accept only a single filter (per map=passwd), and map directives simply replace the home for all users at once. Is there a way to elegantly "fix" single entries after the LDAP lookup? My search mostly encountered answers that replace the pattern for all users, or give unhelpful answers (such as one which simply overrides the previous filter). What I think I need:

filter passwd (&(objectClass=inetOrgPerson)(<redacted>))
# the last part is wishful thinking, not actual syntax
map passwd homeDirectory "<redacted>" if (uid=<redacted>)

I'm new at LDAP so it's possible I don't entirely understand the order in which the transactions happen and whether it's PAM or nslcd that should do this. I realize this is not the best way to handle the situation properly, but I'd still like to know if it can be done.
LDAP per-user overrides
You can do this by creating appropriate ACLs in your directory. Take a look at this forum thread in which the OP wants to have IP address-based (and also filter-based) access control to the directory. There are examples for IP-based ACLs which might help you. Perhaps something like this:

access to *
    by peername.ip=10.10.0.0 read
    by * none

Of course, don't forget to read the OpenLDAP slapd.access manual.
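Narrowing that down to the single entry from the question, a sketch in slapd.conf syntax could look like the following; note this restricts who may read that entry by client address (the mechanism the answer describes), and the DN, IP, and rule placement are assumptions you must adapt to your existing ACL ordering:

```
access to dn.exact="cn=xyz,ou=Person,dc=example,dc=com"
    by peername.ip=10.10.0.0 read
    by * none
```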
In our organization we have set up LDAP using OpenLDAP, and I access it with the phpLDAPadmin GUI. We have a requirement to allow some users access only from a specific IP address. I searched but still could not find an exact solution. example.ldif:

dn: cn=xyz,ou=Person,dc=example,dc=com
cn: xyz
gidnumber: 570
homedirectory: /home/users/xyz
iphostnumber: 10.10.0.0
loginshell: /sbin/nologin
mail: [emailprotected]
objectclass: inetOrgPerson
objectclass: posixAccount
objectclass: top
objectclass: ipHost
postaladdress: 123xyz
sn: XYZ
uid: xyz
uidnumber: 1012

So we want the user xyz to be able to access OpenLDAP only from iphostnumber: 10.10.0.0.
How to restrict user based on ip address in openldap
ldapmodify is telling you that the word Defaultpolicy is not a valid value for the element pwdPolicySubentry. To fix this you need to identify what your schema tells you the valid values can be, and use one of those valid values. Here is an example value: pwdPolicySubentry: cn=Default Password Policy,cn=Password Policies,cn=config
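Applied to the question's LDIF, the corrected file would then read as follows; the policy DN below is only an example value, so point it at an actual pwdPolicy entry that exists in your directory:

```ldif
dn: cn=purval,ou=Users,dc=xxx,dc=com
changetype: modify
add: pwdPolicySubentry
pwdPolicySubentry: cn=Default Password Policy,cn=Password Policies,cn=config
```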
I want to apply a password policy to one particular user, using an LDIF file. Here is my test.ldif file:

dn: cn=purval,ou=Users,dc=xxx,dc=com
changetype: modify
add: pwdPolicySubentry
pwdPolicySubentry: Defaultpolicy

The command is:

ldapmodify -x -D "cn=admin,dc=xxx,dc=com" -w password -f /tmp/addpolicy.ldif

The error displayed is:

modifying entry "cn=purval,ou=Users,dc=xxx,dc=com"
ldap_modify: Invalid syntax (21)
additional info: pwdPolicySubentry: value #0 invalid per syntax
error during applying password policy with ldapmodify
If I understood correctly, nscd caches password for 10 min for each login.No, it only caches the user information that would be found in /etc/passwd, which is generally everything except the password. That would be the shadow map – but most LDAP configurations do not use shadow; they don't reveal the password hash to nsswitch at all.does this give me the ability to login, within 24 hrs, when ldap server is down?Most likely not. In nearly all setups, login through LDAP is not done through nsswitch (i.e. not by retrieving the password hash for local verification); it's done by PAM actively contacting the LDAP server and having it verify the password. The only way to cache that is via PAM. If your pam_ldap module doesn't have caching built-in, the pam_ccache module can be inserted above it to provide something similar. Alternatively, if you are really looking for redundancy, then set up a second LDAP server. Use OpenLDAP's "Syncrepl" to have it automatically replicate data (optionally even in both directions). (It's pretty odd that your LDAP server is more susceptible to power outages than the rest of your network, but if e.g. there are multiple circuits, then place the 2nd server on the circuit that has fewer outages...)
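For reference, the nscd.conf change the asker is considering is just the following; as explained above, this only extends caching of passwd-map lookups (names, uids, shells), not password verification, so by itself it will not keep logins working through an outage:

```
# /etc/nscd.conf (excerpt)
enable-cache            passwd  yes
positive-time-to-live   passwd  86400
```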
As I mentioned in another thread, I have an LDAP system supporting two dozen Linux servers. When the LDAP server is down for various reasons (firewall rule changes, power outage, etc.), the rest of my systems hang. I am hoping to build some redundancy, and stumbled upon articles on using nscd or sssd for caching logins locally. All my servers have nscd installed, with the following settings:

enable-cache passwd yes
positive-time-to-live passwd 600

These are the default settings. If I understood correctly, nscd caches the password for 10 min for each login. To handle unexpected LDAP server downtime, I am wondering if I can increase positive-time-to-live to a larger number, say a full day (86400 s). Does this give me the ability to log in, within 24 hrs, when the LDAP server is down? Is there any risk in doing this? I saw in various threads that sssd serves similar purposes, although when I tried it, it broke my PAM settings and made all logins "permission denied". Without being able to figure out why, I decided to remove sssd for now and focus on using nscd instead. All my servers run Ubuntu on various LTS versions.
Understanding risks of setting nscd positive-time-to-live to a longer duration
Use ldapadd to create a "normal" LDAP entry with this DN, with whatever attributes you like. The DN will retain its "superuser" privileges. (However, once it becomes a normal entry, its password will be checked against the 'userPassword' attribute – no longer against the rootPW from configuration. So when you're creating this entry, don't forget to set a userPassword with help of slappasswd.)
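A sketch of such an entry as LDIF; the suffix, mail address, and hash are placeholders (generate the hash with slappasswd), and inetOrgPerson is chosen here because it permits the mail attribute that phpLDAPadmin asks for:

```ldif
dn: cn=admin,dc=example,dc=com
objectClass: inetOrgPerson
cn: admin
sn: admin
mail: admin@example.com
userPassword: {SSHA}replace-with-slappasswd-output
```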
I'm trying to configure LDAP using slapd and phpLDAPadmin (the non-official version from GitHub) on my Ubuntu server. slapd and phpLDAPadmin work perfectly, but I have a problem logging in with the admin user from slapd: phpLDAPadmin wants to use an email for login, but slapd doesn't have a mail attribute for the admin user. So my question is: how do I give a mail attribute to the admin user, or is there any other way to log in to phpLDAPadmin? Thanks for reading my question. Here is the screenshot of the login problem. The second screenshot is when I input the email using [emailprotected]
Can't log in to phpLDAPadmin using admin user
The first problem I see is that you're trying to create an object with this distinguishedName:

dn: cn=ekcrlegalofficer

That's invalid; the schema needs to exist "inside" an existing hierarchy. The "no global superior knowledge" error means "I have no idea where to place this object". Take a look at the existing schemas, which will look something like:

dn: cn={0}core,cn=schema,cn=config

You probably want your dn to look like:

dn: cn=ekcrlegalofficer,cn=schema,cn=config
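So the head of the generated schema file should look like the fragment below; schema under cn=config is then usually loaded through the config backend rather than with a regular bind DN (the command in the comment is the common pattern, adjust it to your own access setup):

```ldif
# load with: ldapadd -Y EXTERNAL -H ldapi:/// -f ekcrlegalofficer.ldif
dn: cn=ekcrlegalofficer,cn=schema,cn=config
objectClass: olcSchemaConfig
cn: ekcrlegalofficer
```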
I have created new schema which looks like this attributetype ( 2.25.3236588 NAME 'x-candidateNumber' DESC 'Candidate number' EQUALITY caseIgnoreMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{32768} )attributetype ( 2.25.3536282 NAME 'x-candidateFullName' DESC 'Candidate name' EQUALITY caseIgnoreMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{32768} )attributetype ( 2.25.6587875 NAME 'x-candidateTitleBeforeName' DESC 'Candidate title before name' EQUALITY caseIgnoreMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{32768} )attributetype ( 2.25.6164147 NAME 'x-candidateTitleAfterName' DESC 'Candidate title after name' EQUALITY caseIgnoreMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{32768} )attributetype ( 2.25.1702122 NAME 'x-candidateBirthNumber' DESC 'Candidate title after name' EQUALITY caseIgnoreMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{32768} )attributetype ( 2.25.3134432 NAME 'x-candidateListedAt' DESC 'Candidate listed at' EQUALITY caseIgnoreMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{32768} )attributetype ( 2.25.3682754 NAME 'x-candidateErasedAt' DESC 'Candidate erased at' EQUALITY caseIgnoreMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{32768} )attributetype ( 2.25.5497561 NAME 'x-candidateNote' DESC 'Candidate note' EQUALITY caseIgnoreMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{32768} )attributetype ( 2.25.9736218 NAME 'x-candidateStatus' DESC 'Candidate status' EQUALITY caseIgnoreMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{32768} )objectclass ( 2.25.1798306 NAME 'ekcrCandidate' DESC 'RFC1274: simple security object' SUP ( top $ person $ organizationalPerson $ inetOrgPerson ) STRUCTURAL MUST (cn $ ou) MAY ( x-candidateNumber $ x-candidateFullName $ x-candidateTitleBeforeName $ x-candidateBirthNumber $ x-candidateTitleAfterName $ x-candidateListedAt $ x-candidateErasedAt $ x-candidateNote $ x-candidateStatus ))I added this schema into schema_convert.conf file: include /etc/ldap/schema/core.schema include /etc/ldap/schema/collective.schema include /etc/ldap/schema/corba.schema 
include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/duaconf.schema include /etc/ldap/schema/dyngroup.schema include /etc/ldap/schema/inetorgperson.schema include /etc/ldap/schema/java.schema include /etc/ldap/schema/misc.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/openldap.schema include /etc/ldap/schema/pmi.schema include /etc/ldap/schema/ekcrconcipient.schema include /etc/ldap/schema/ekcrcandidate.schema include /etc/ldap/schema/ekcrlegalofficer.schemaThen I converted the schema into an ldif file slaptest -f schema_convert.conf -F /tmp/ldif_outputIt generated a file which I modified as explained here in step 4. Resulting cn={14}ekcrlegalofficer.ldif file now looks like this: dn: cn=ekcrlegalofficer objectClass: olcSchemaConfig cn: ekcrlegalofficer olcAttributeTypes: {0}( 2.25.7702021 NAME 'x-legalOfficerNumber' DESC 'Legal o fficer number' EQUALITY caseIgnoreMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{ 32768} ) olcAttributeTypes: {1}( 2.25.960171 NAME 'x-legalOfficerFullName' DESC 'Legal officer name' EQUALITY caseIgnoreMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{3 2768} ) olcAttributeTypes: {2}( 2.25.196694 NAME 'x-legalOfficerTitleBeforeName' DESC 'Legal officer title before name' EQUALITY caseIgnoreMatch SYNTAX 1.3.6.1.4.1 .1466.115.121.1.15{32768} ) olcAttributeTypes: {3}( 2.25.7643140 NAME 'x-legalOfficerTitleAfterName' DESC 'Legal officer title after name' EQUALITY caseIgnoreMatch SYNTAX 1.3.6.1.4.1. 1466.115.121.1.15{32768} ) olcAttributeTypes: {4}( 2.25.1064416 NAME 'x-legalOfficerListedAt' DESC 'Legal officer listed at' EQUALITY caseIgnoreMatch SYNTAX 1.3.6.1.4.1.1466.115.121. 1.15{32768} ) olcAttributeTypes: {5}( 2.25.1005975 NAME 'x-legalOfficerErasedAt' DESC 'Legal Officer erased at' EQUALITY caseIgnoreMatch SYNTAX 1.3.6.1.4.1.1466.115.121. 
1.15{32768} )
olcAttributeTypes: {6}( 2.25.5513419 NAME 'x-legalOfficerNote' DESC 'Legal Officer note' EQUALITY caseIgnoreMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{32768} )
olcAttributeTypes: {7}( 2.25.4535859 NAME 'x-legalOfficerStatus' DESC 'Legal Officer status' EQUALITY caseIgnoreMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{32768} )
olcObjectClasses: {0}( 2.25.6182638 NAME 'ekcrLegalOfficer' DESC 'RFC1274: simple security object' SUP ( top $ person $ organizationalPerson $ inetOrgPerson ) STRUCTURAL MUST ( cn $ ou ) MAY ( x-legalOfficerNumber $ x-legalOfficerFullName $ x-legalOfficerTitleBeforeName $ x-legalOfficerTitleAfterName $ x-legalOfficerListedAt $ x-legalOfficerErasedAt $ x-legalOfficerNote $ x-legalOfficerStatus ) )

Then I tried to add this new objectClass with:

ldapadd -D "cn=admin,cn=config" -W -f cn={14}ekcrlegalofficer.ldif

which resulted in this:

ldap_add: Server is unwilling to perform (53)
additional info: no global superior knowledge

I understand this error might occur when trying to add a new record into the wrong database, but since I'm trying to create a new object class this shouldn't be my case. It actually worked for me in the past, but then I reconfigured my openLDAP server using dpkg-reconfigure slapd, and since then I'm facing this issue. I've been stuck on this for three days now and I'm really desperate; I would be really grateful for any help.
Getting "ldap_add: Server is unwilling to perform (53) additional info: no global superior knowledge" when trying to add new objectClass to openLDAP
See the slapd-config(5) manual page. Nearly all settings retain the same names in the LDAP configuration backend, only with the olc namespace prefix. (And you're really supposed to edit these via LDAP or at least via slapmodify, not by hand.)

olcLogFile: <filename>
       Specify a file for recording slapd debug messages. By default these
       messages only go to stderr, are not recorded anywhere else, and are
       unrelated to messages exposed by the olcLogLevel configuration
       parameter. Specifying a logfile copies messages to both stderr and
       the logfile.

olcLogFileFormat: debug | syslog-utc | syslog-localtime
       Specify the prefix format for messages written to the logfile. The
       debug [...]

olcLogLevel: <integer> [...]
       Specify the level at which debugging statements and operation
       statistics should be syslogged (currently logged to the syslogd(8)
       LOG_LOCAL4 facility).
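For example, both settings can be changed in one ldapmodify operation against cn=config; the log path is an example of mine, and you would use add: instead of replace: for an attribute that has never been set on your server:

```ldif
dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: stats
-
replace: olcLogFile
olcLogFile: /var/log/slapd.log
```

Apply it with something like ldapmodify -Y EXTERNAL -H ldapi:/// -f logging.ldif.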
I have set up OpenLDAP 2.4 on Debian 11, and I want to change a few parameters, like the loglevel and logfile, which appears to be really simple:

# man slapd.conf
SLAPD.CONF(5)              File Formats Manual              SLAPD.CONF(5)

NAME
       slapd.conf - configuration file for slapd, the stand-alone LDAP daemon

SYNOPSIS
       /etc/ldap/slapd.conf

DESCRIPTION
       The file /etc/ldap/slapd.conf contains configuration information for
       the slapd(8) daemon. This configuration file is also used by the
       SLAPD tools slapacl(8), slapadd(8), slapauth(8), slapcat(8),
       slapdn(8), slapindex(8), and slaptest(8).
       ...
logfile <filename>
       Specify a file for recording debug log messages. By default these
       messages only go to stderr and are not recorded anywhere else.
       Specifying a logfile copies messages to both stderr and the logfile.

loglevel <integer> [...]
       Specify the level at which debugging statements and operation
       statistics should be syslogged (currently logged to the syslogd(8)
       LOG_LOCAL4 facility). They must be considered subsystems rather than
       increasingly verbose log levels. Some messages with higher priority
       are logged ...

...except that there is no file called slapd.conf; it has been replaced by /etc/ldap/slapd.d, which is much tidier. However, I can't find a description of how the slapd.conf parameters fit into /etc/ldap/slapd.d; there is a parameter called olcLogLevel in /etc/ldap/slapd.d/cn=config.ldif that may well be the one, but what about everything else from slapd.conf? Anyway, my question: how can I change the loglevel and logfile in the new scheme?
How to set loglevel in slapd.d?
The dialog boxes were likely from debconf, to help configure the installed packages. If so, the options were saved into the debconf database and you should be able to see them using debconf-get-selections | grep ldap. You can change the options using debconf-set-selections. The debconf options should be removed when running apt purge $PACKAGE, but if that is not working you can manually purge them using echo PURGE | debconf-communicate $PACKAGE, where $PACKAGE is the name of the package as shown in debconf. While testing this, I noticed that echo PURGE | debconf-communicate libnss-ldap did not remove entries for libnss-ldap:amd64, so I had to run echo PURGE | debconf-communicate libnss-ldap:amd64 to remove those as well. Once the options are removed, attempting to install the packages again interactively should prompt with the dialog boxes again.
I have a system with Debian 11, and want to experiment with setting it up as an LDAP client for user authentication, following this: https://linuxhint.com/configure-ldap-client-debian/. However, about half-way through the configuration, that happens as part of the installation, I pressed the wrong key, which terminated the dialog I was in, and it went ahead and installed stuff. I tried to remove the installation with apt purge libnss-ldap libpam-ldap ldap-utils nscd and install from the beginning - but now, it just powers through without showing the dialogs. I have tried to find all LDAP related files with find / -iname "*ldap*", so I can remove them, but there doesn't seem to be anything relevant; what do I need to do to be able to re-install as if it had never been done before?
how to re-install LDAP client in Debian?
Your first error is:

unable to open pid file "/var/run/openldap/slapd.pid": 2 (No such file or directory)

There are a couple of ways to resolve this error.

Fix the filesystem

slapd is trying to write a pid file to /var/run/openldap/slapd.pid, but the directory /var/run/openldap doesn't exist. /var/run is a symlink to /run, which is an ephemeral directory: it is re-created every time the system boots. To create a directory in /run, you can use systemd-tmpfiles. In /etc/tmpfiles.d, create a file slapd.conf with the following content:

D /run/openldap 0755 ldap ldap

Then run:

systemd-tmpfiles --create

This will ensure that /var/run/openldap exists and that it gets created when the system boots. You will need to update your slapd systemd unit to use the correct path:

[Service]
Type=forking
PIDFile=/var/run/openldap/slapd.pid
Environment="SLAPD_URLS=ldap:/// ldapi:/// ldaps:///"
Environment="SLAPD_OPTIONS=-F /etc/openldap/slapd.d"
ExecStart=/usr/libexec/slapd -u ldap -g ldap -h ${SLAPD_URLS} $SLAPD_OPTIONS

Remove PidFile from your configuration

Your slapd unit file is using the PIDFile directive because you're running slapd as Type=forking. From the systemd.service(5) man page:

PIDFile= Takes a path referring to the PID file of the service. Usage of this option is recommended for services where Type= is set to forking. The path specified typically points to a file below /run/. If a relative path is specified it is hence prefixed with /run/. The service manager will read the PID of the main process of the service from this file after start-up of the service.

So if we don't need to use Type=forking, we can remove the PIDFile configuration here and the corresponding pidfile configuration in slapd.
We modify the slapd command line to include -d0, which causes slapd to run in the foreground:

[Service]
Type=simple
Environment="SLAPD_URLS=ldap:/// ldapi:/// ldaps:///"
Environment="SLAPD_OPTIONS=-F /etc/openldap/slapd.d"
ExecStart=/usr/libexec/slapd -d0 -u ldap -g ldap -h ${SLAPD_URLS} $SLAPD_OPTIONS

And then remove your pidfile setting from slapd.conf (or the olcPidFile setting from cn=config).
I have compiled the current version of OpenLDAP on a fresh RHEL 8 instance and am setting it up with a signed SSL certificate. When I start slapd, I get unable to open pid file "/var/run/openldap/slapd.pid": 2 (No such file or directory). Surprise, surprise, the openldap directory does not exist. I created the directory and set ownership to ldap:ldap. Now when I start slapd, I get Can't open PID file /var/lib/openldap/slapd.pid (yet?) after start: No such file or directory. Shouldn't the service be creating the pid file? I tried troubleshooting by doing slapd -u ldap -g ldap -d 255 but it doesn't return any errors. It starts up slapd and then hangs indefinitely. Here's the output:

632b738b.28af1a7e 0x7fb91fe62840 slapd starting
632b738b.28b084a2 0x7fb918147700 daemon: added 4r listener=(nil)
632b738b.28b0e5e8 0x7fb918147700 daemon: added 7r listener=0x1789270
632b738b.28b11145 0x7fb918147700 daemon: added 8r listener=0x1789360
632b738b.28b2645c 0x7fb918147700 daemon: epoll: listen=7 active_threads=0 tvp=zero
632b738b.28b27b69 0x7fb918147700 daemon: epoll: listen=8 active_threads=0 tvp=zero
632b738b.28b28d61 0x7fb918147700 daemon: activity on 1 descriptor
632b738b.28b2a342 0x7fb918147700 daemon: activity on:
632b738b.28b2aaae 0x7fb918147700
632b738b.28b2c02b 0x7fb918147700 daemon: epoll: listen=7 active_threads=0 tvp=zero
632b738b.28b2d2eb 0x7fb918147700 daemon: epoll: listen=8 active_threads=0 tvp=zero

Any idea as to what to try next?
Here's my configure if it helps:

./configure --prefix=/usr --sysconfdir=/etc --disable-static --enable-debug --with-tls=openssl --with-cyrus-sasl --enable-dynamic --enable-crypt --enable-spasswd --enable-slapd --enable-modules --enable-rlookups --enable-backends=mod --disable-ndb --disable-sql --disable-shell --disable-bdb --disable-hdb --enable-overlays=mod

slapd.service:

[Unit]
Description=OpenLDAP Server Daemon
After=syslog.target network-online.target
Documentation=man:slapd
Documentation=man:slapd-mdb

[Service]
Type=forking
PIDFile=/var/lib/openldap/slapd.pid
Environment="SLAPD_URLS=ldap:/// ldapi:/// ldaps:///"
Environment="SLAPD_OPTIONS=-F /etc/openldap/slapd.d"
ExecStart=/usr/libexec/slapd -u ldap -g ldap -h ${SLAPD_URLS} $SLAPD_OPTIONS

[Install]
WantedBy=multi-user.target
Installing OpenLDAP on RHEL 8 -- slapd.pid problem
Using OpenLDAP 2.5.13 from https://ltb-project.org/documentation/index.html on CentOS 8 stream, I'm able to get your LDIF to load with the following changes:
I commented out all of the path-related configurations from cn=config because they didn't apply on my system, and I didn't want to bother setting up certificates:
#olcArgsFile: /var/run/openldap/slapd.args
#olcPidFile: /var/run/openldap/slapd.pid
#olcTLSCACertificatePath: /etc/openldap/certs
#olcTLSCertificateFile: /etc/openldap/certs/1d40117d24e9b169.pem
#olcTLSCertificateKeyFile: /etc/openldap/certs/yln.key
I needed to explicitly load the back_mdb module:
dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulePath: /usr/local/openldap/libexec/openldap/
olcModuleLoad: back_mdb.so
This may not be necessary if it has been compiled into your local build.
I replaced all of the schemas with the appropriate content from my local openldap/schemas/ directory.
I fixed a syntax error in one of your olcAccess rules. You had:
olcAccess: {2}to * by self write by dn="cn=admin,dc=yln,dc=info" write by * read
It looks as if you dropped a trailing space ( ) there at some point; as written, that unfolds to:
olcAccess: {2}to * by self write by dn="cn=admin,dc=yln,dc=info" write by *read
Where that *read on the end is invalid syntax. You can either just add a trailing space after *, or better yet reformat the line to be a little more readable:
olcAccess: {2}to * by self write
  by dn="cn=admin,dc=yln,dc=info" write
  by * read
Note that each line is indented two spaces. This gets us one space for the LDIF folding, and then another literal space to separate each line from the last word on the preceding line.
I had to comment out all of the olcDbConfig statements, which were unrecognized in my environment:
#olcDbConfig: {0}set_cachesize 0 2097152 0
#olcDbConfig: {1}set_lk_max_objects 1500
#olcDbConfig: {2}set_lk_max_locks 1500
#olcDbConfig: {3}set_lk_max_lockers 1500
With these changes, I was able to successfully slapadd ... your content.
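A quick way to spot this kind of trailing-space/folding problem is to unfold the LDIF continuation lines (per RFC 2849, a continuation line begins with exactly one space, which is stripped when the line is joined) and inspect the result. A small sketch using a correctly folded rule as sample input:

```shell
# Build a folded LDIF sample: each continuation line starts with one fold
# space plus one literal space, as recommended in the answer above.
cat > sample.ldif <<'EOF'
olcAccess: {2}to * by self write
  by dn="cn=admin,dc=yln,dc=info" write
  by * read
EOF
# Unfold: join any line starting with a space onto the previous line,
# dropping only the single fold space.
awk '/^ / { printf "%s", substr($0, 2); next }
     { if (NR > 1) print ""; printf "%s", $0 }
     END { print "" }' sample.ldif
```

If a fold is missing its literal second space, the unfolded output shows the fused token (like *read) immediately.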
I'm finally moving from RHEL 7 to 8. I have a new 8.6 installation and I've compiled OpenLDAP 2.5.13 and done the basic setup. As I'm moving from an existing OpenLDAP instance, I exported my LDAP settings on the old server. This new OpenLDAP uses mdb instead of hdb so I changed all instances of that in the exported ldif file. I've deleted everything in /etc/openldap/slapd.d/. When I run slapadd -n 0 -F /etc/openldap/slapd.d -l configbackup.conf -d 64 I get this config_back_db_open: No explicit ACL for back-config configured. Using hardcoded default olcBackend: value #0: <olcBackend> failed init (mdb)! slapadd: could not add entry dn="olcBackend={0}mdb,cn=config" (line=609): <olcBackend> failed initHere're the contents of configbackup.conf: dn: cn=config objectClass: olcGlobal cn: config olcArgsFile: /var/run/openldap/slapd.args olcPidFile: /var/run/openldap/slapd.pid olcTLSCACertificatePath: /etc/openldap/certs olcTLSCertificateFile: /etc/openldap/certs/1d40117d24e9b169.pem olcTLSCertificateKeyFile: /etc/openldap/certs/yln.key olcToolThreads: 1 structuralObjectClass: olcGlobal entryUUID: 940013a0-3521-1034-9ed9-875b6f3874a7 creatorsName: cn=config createTimestamp: 20150120185459Z olcLogLevel: 0 entryCSN: 20220824150941.487221Z#000000#000#000000 modifiersName: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth modifyTimestamp: 20220824150941Zdn: cn=schema,cn=config objectClass: olcSchemaConfig cn: schema structuralObjectClass: olcSchemaConfig entryUUID: 94003cfe-3521-1034-9edc-875b6f3874a7 creatorsName: cn=config createTimestamp: 20150120185459Z entryCSN: 20150120185459.719049Z#000000#000#000000 modifiersName: cn=config modifyTimestamp: 20150120185459Zdn: cn={0}core,cn=schema,cn=config objectClass: olcSchemaConfig cn: {0}core olcAttributeTypes: {0}( 2.5.4.2 NAME 'knowledgeInformation' DESC 'RFC2256: k nowledge information' EQUALITY caseIgnoreMatch SYNTAX 1.3.6.1.4.1.1466.115. 
121.1.15{32768} )<following olcAttributeTypes deleted to fit character limit>olcObjectClasses: {0}( 2.5.6.2 NAME 'country' DESC 'RFC2256: a country' SUP top STRUCTURAL MUST c MAY ( searchGuide $ description ) ) olcObjectClasses: {1}( 2.5.6.3 NAME 'locality' DESC 'RFC2256: a locality' SU P top STRUCTURAL MAY ( street $ seeAlso $ searchGuide $ st $ l $ descriptio n ) ) olcObjectClasses: {2}( 2.5.6.4 NAME 'organization' DESC 'RFC2256: an organiz ation' SUP top STRUCTURAL MUST o MAY ( userPassword $ searchGuide $ seeAlso $ businessCategory $ x121Address $ registeredAddress $ destinationIndicato r $ preferredDeliveryMethod $ telexNumber $ teletexTerminalIdentifier $ tel ephoneNumber $ internationaliSDNNumber $ facsimileTelephoneNumber $ street $ postOfficeBox $ postalCode $ postalAddress $ physicalDeliveryOfficeName $ st $ l $ description ) ) olcObjectClasses: {3}( 2.5.6.5 NAME 'organizationalUnit' DESC 'RFC2256: an o rganizational unit' SUP top STRUCTURAL MUST ou MAY ( userPassword $ searchG uide $ seeAlso $ businessCategory $ x121Address $ registeredAddress $ desti nationIndicator $ preferredDeliveryMethod $ telexNumber $ teletexTerminalId entifier $ telephoneNumber $ internationaliSDNNumber $ facsimileTelephoneNu mber $ street $ postOfficeBox $ postalCode $ postalAddress $ physicalDelive ryOfficeName $ st $ l $ description ) ) olcObjectClasses: {4}( 2.5.6.6 NAME 'person' DESC 'RFC2256: a person' SUP to p STRUCTURAL MUST ( sn $ cn ) MAY ( userPassword $ telephoneNumber $ seeAls o $ description ) ) olcObjectClasses: {5}( 2.5.6.7 NAME 'organizationalPerson' DESC 'RFC2256: an organizational person' SUP person STRUCTURAL MAY ( title $ x121Address $ r egisteredAddress $ destinationIndicator $ preferredDeliveryMethod $ telexNu mber $ teletexTerminalIdentifier $ telephoneNumber $ internationaliSDNNumbe r $ facsimileTelephoneNumber $ street $ postOfficeBox $ postalCode $ posta lAddress $ physicalDeliveryOfficeName $ ou $ st $ l ) ) olcObjectClasses: {6}( 2.5.6.8 NAME 
'organizationalRole' DESC 'RFC2256: an o rganizational role' SUP top STRUCTURAL MUST cn MAY ( x121Address $ register edAddress $ destinationIndicator $ preferredDeliveryMethod $ telexNumber $ teletexTerminalIdentifier $ telephoneNumber $ internationaliSDNNumber $ fac simileTelephoneNumber $ seeAlso $ roleOccupant $ preferredDeliveryMethod $ street $ postOfficeBox $ postalCode $ postalAddress $ physicalDeliveryOffic eName $ ou $ st $ l $ description ) ) olcObjectClasses: {7}( 2.5.6.9 NAME 'groupOfNames' DESC 'RFC2256: a group of names (DNs)' SUP top STRUCTURAL MUST ( member $ cn ) MAY ( businessCategor y $ seeAlso $ owner $ ou $ o $ description ) ) olcObjectClasses: {8}( 2.5.6.10 NAME 'residentialPerson' DESC 'RFC2256: an r esidential person' SUP person STRUCTURAL MUST l MAY ( businessCategory $ x1 21Address $ registeredAddress $ destinationIndicator $ preferredDeliveryMet hod $ telexNumber $ teletexTerminalIdentifier $ telephoneNumber $ internati onaliSDNNumber $ facsimileTelephoneNumber $ preferredDeliveryMethod $ stree t $ postOfficeBox $ postalCode $ postalAddress $ physicalDeliveryOfficeName $ st $ l ) ) olcObjectClasses: {9}( 2.5.6.11 NAME 'applicationProcess' DESC 'RFC2256: an application process' SUP top STRUCTURAL MUST cn MAY ( seeAlso $ ou $ l $ de scription ) ) olcObjectClasses: {10}( 2.5.6.12 NAME 'applicationEntity' DESC 'RFC2256: an application entity' SUP top STRUCTURAL MUST ( presentationAddress $ cn ) MA Y ( supportedApplicationContext $ seeAlso $ ou $ o $ l $ description ) ) olcObjectClasses: {11}( 2.5.6.13 NAME 'dSA' DESC 'RFC2256: a directory syste m agent (a server)' SUP applicationEntity STRUCTURAL MAY knowledgeInformati on ) olcObjectClasses: {12}( 2.5.6.14 NAME 'device' DESC 'RFC2256: a device' SUP top STRUCTURAL MUST cn MAY ( serialNumber $ seeAlso $ owner $ ou $ o $ l $ description ) ) olcObjectClasses: {13}( 2.5.6.15 NAME 'strongAuthenticationUser' DESC 'RFC22 56: a strong authentication user' SUP top AUXILIARY MUST userCertificate ) 
olcObjectClasses: {14}( 2.5.6.16 NAME 'certificationAuthority' DESC 'RFC2256 : a certificate authority' SUP top AUXILIARY MUST ( authorityRevocationList $ certificateRevocationList $ cACertificate ) MAY crossCertificatePair ) olcObjectClasses: {15}( 2.5.6.17 NAME 'groupOfUniqueNames' DESC 'RFC2256: a group of unique names (DN and Unique Identifier)' SUP top STRUCTURAL MUST ( uniqueMember $ cn ) MAY ( businessCategory $ seeAlso $ owner $ ou $ o $ de scription ) ) olcObjectClasses: {16}( 2.5.6.18 NAME 'userSecurityInformation' DESC 'RFC225 6: a user security information' SUP top AUXILIARY MAY ( supportedAlgorithms ) ) olcObjectClasses: {17}( 2.5.6.16.2 NAME 'certificationAuthority-V2' SUP cert ificationAuthority AUXILIARY MAY ( deltaRevocationList ) ) olcObjectClasses: {18}( 2.5.6.19 NAME 'cRLDistributionPoint' SUP top STRUCTU RAL MUST ( cn ) MAY ( certificateRevocationList $ authorityRevocationList $ deltaRevocationList ) ) olcObjectClasses: {19}( 2.5.6.20 NAME 'dmd' SUP top STRUCTURAL MUST ( dmdNam e ) MAY ( userPassword $ searchGuide $ seeAlso $ businessCategory $ x121Add ress $ registeredAddress $ destinationIndicator $ preferredDeliveryMethod $ telexNumber $ teletexTerminalIdentifier $ telephoneNumber $ internationali SDNNumber $ facsimileTelephoneNumber $ street $ postOfficeBox $ postalCode $ postalAddress $ physicalDeliveryOfficeName $ st $ l $ description ) ) olcObjectClasses: {20}( 2.5.6.21 NAME 'pkiUser' DESC 'RFC2587: a PKI user' S UP top AUXILIARY MAY userCertificate ) olcObjectClasses: {21}( 2.5.6.22 NAME 'pkiCA' DESC 'RFC2587: PKI certificate authority' SUP top AUXILIARY MAY ( authorityRevocationList $ certificateRe vocationList $ cACertificate $ crossCertificatePair ) ) olcObjectClasses: {22}( 2.5.6.23 NAME 'deltaCRL' DESC 'RFC2587: PKI user' SU P top AUXILIARY MAY deltaRevocationList ) olcObjectClasses: {23}( 1.3.6.1.4.1.250.3.15 NAME 'labeledURIObject' DESC 'R FC2079: object that contains the URI attribute type' MAY ( labeledURI ) SUP top AUXILIARY ) 
olcObjectClasses: {24}( 0.9.2342.19200300.100.4.19 NAME 'simpleSecurityObjec t' DESC 'RFC1274: simple security object' SUP top AUXILIARY MUST userPasswo rd ) olcObjectClasses: {25}( 1.3.6.1.4.1.1466.344 NAME 'dcObject' DESC 'RFC2247: domain component object' SUP top AUXILIARY MUST dc ) olcObjectClasses: {26}( 1.3.6.1.1.3.1 NAME 'uidObject' DESC 'RFC2377: uid ob ject' SUP top AUXILIARY MUST uid ) structuralObjectClass: olcSchemaConfig entryUUID: 94005928-3521-1034-9edd-875b6f3874a7 creatorsName: cn=config createTimestamp: 20150120185459Z entryCSN: 20150120185459.719768Z#000000#000#000000 modifiersName: cn=config modifyTimestamp: 20150120185459Zdn: cn={1}cosine,cn=schema,cn=config objectClass: olcSchemaConfig cn: {1}cosine olcAttributeTypes: {0}( 0.9.2342.19200300.100.1.2 NAME 'textEncodedORAddress ' EQUALITY caseIgnoreMatch SUBSTR caseIgnoreSubstringsMatch SYNTAX 1.3.6.1. 4.1.1466.115.121.1.15{256} )<following olcAttributeTypes deleted to fit character limit>olcObjectClasses: {0}( 0.9.2342.19200300.100.4.4 NAME ( 'pilotPerson' 'newPi lotPerson' ) SUP person STRUCTURAL MAY ( userid $ textEncodedORAddress $ rf c822Mailbox $ favouriteDrink $ roomNumber $ userClass $ homeTelephoneNumber $ homePostalAddress $ secretary $ personalTitle $ preferredDeliveryMethod $ businessCategory $ janetMailbox $ otherMailbox $ mobileTelephoneNumber $ pagerTelephoneNumber $ organizationalStatus $ mailPreferenceOption $ person alSignature ) ) olcObjectClasses: {1}( 0.9.2342.19200300.100.4.5 NAME 'account' SUP top STRU CTURAL MUST userid MAY ( description $ seeAlso $ localityName $ organizatio nName $ organizationalUnitName $ host ) ) olcObjectClasses: {2}( 0.9.2342.19200300.100.4.6 NAME 'document' SUP top STR UCTURAL MUST documentIdentifier MAY ( commonName $ description $ seeAlso $ localityName $ organizationName $ organizationalUnitName $ documentTitle $ documentVersion $ documentAuthor $ documentLocation $ documentPublisher ) ) olcObjectClasses: {3}( 0.9.2342.19200300.100.4.7 NAME 
'room' SUP top STRUCTU RAL MUST commonName MAY ( roomNumber $ description $ seeAlso $ telephoneNum ber ) ) olcObjectClasses: {4}( 0.9.2342.19200300.100.4.9 NAME 'documentSeries' SUP t op STRUCTURAL MUST commonName MAY ( description $ seeAlso $ telephonenumber $ localityName $ organizationName $ organizationalUnitName ) ) olcObjectClasses: {5}( 0.9.2342.19200300.100.4.13 NAME 'domain' SUP top STRU CTURAL MUST domainComponent MAY ( associatedName $ organizationName $ descr iption $ businessCategory $ seeAlso $ searchGuide $ userPassword $ locality Name $ stateOrProvinceName $ streetAddress $ physicalDeliveryOfficeName $ p ostalAddress $ postalCode $ postOfficeBox $ streetAddress $ facsimileTeleph oneNumber $ internationalISDNNumber $ telephoneNumber $ teletexTerminalIden tifier $ telexNumber $ preferredDeliveryMethod $ destinationIndicator $ reg isteredAddress $ x121Address ) ) olcObjectClasses: {6}( 0.9.2342.19200300.100.4.14 NAME 'RFC822localPart' SUP domain STRUCTURAL MAY ( commonName $ surname $ description $ seeAlso $ tel ephoneNumber $ physicalDeliveryOfficeName $ postalAddress $ postalCode $ po stOfficeBox $ streetAddress $ facsimileTelephoneNumber $ internationalISDNN umber $ telephoneNumber $ teletexTerminalIdentifier $ telexNumber $ preferr edDeliveryMethod $ destinationIndicator $ registeredAddress $ x121Address ) ) olcObjectClasses: {7}( 0.9.2342.19200300.100.4.15 NAME 'dNSDomain' SUP domai n STRUCTURAL MAY ( ARecord $ MDRecord $ MXRecord $ NSRecord $ SOARecord $ C NAMERecord ) ) olcObjectClasses: {8}( 0.9.2342.19200300.100.4.17 NAME 'domainRelatedObject' DESC 'RFC1274: an object related to an domain' SUP top AUXILIARY MUST asso ciatedDomain ) olcObjectClasses: {9}( 0.9.2342.19200300.100.4.18 NAME 'friendlyCountry' SUP country STRUCTURAL MUST friendlyCountryName ) olcObjectClasses: {10}( 0.9.2342.19200300.100.4.20 NAME 'pilotOrganization' SUP ( organization $ organizationalUnit ) STRUCTURAL MAY buildingName ) olcObjectClasses: {11}( 
0.9.2342.19200300.100.4.21 NAME 'pilotDSA' SUP dsa S TRUCTURAL MAY dSAQuality ) olcObjectClasses: {12}( 0.9.2342.19200300.100.4.22 NAME 'qualityLabelledData ' SUP top AUXILIARY MUST dsaQuality MAY ( subtreeMinimumQuality $ subtreeMa ximumQuality ) ) structuralObjectClass: olcSchemaConfig entryUUID: 9400b986-3521-1034-9ede-875b6f3874a7 creatorsName: cn=config createTimestamp: 20150120185459Z entryCSN: 20150120185459.722234Z#000000#000#000000 modifiersName: cn=config modifyTimestamp: 20150120185459Zdn: cn={2}nis,cn=schema,cn=config objectClass: olcSchemaConfig cn: {2}nis olcAttributeTypes: {0}( 1.3.6.1.1.1.1.2 NAME 'gecos' DESC 'The GECOS field; the common name' EQUALITY caseIgnoreIA5Match SUBSTR caseIgnoreIA5Substrings Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 SINGLE-VALUE ) olcAttributeTypes: {1}( 1.3.6.1.1.1.1.3 NAME 'homeDirectory' DESC 'The absol ute path to the home directory' EQUALITY caseExactIA5Match SYNTAX 1.3.6.1.4 .1.1466.115.121.1.26 SINGLE-VALUE ) olcAttributeTypes: {2}( 1.3.6.1.1.1.1.4 NAME 'loginShell' DESC 'The path to the login shell' EQUALITY caseExactIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121 .1.26 SINGLE-VALUE ) olcAttributeTypes: {3}( 1.3.6.1.1.1.1.5 NAME 'shadowLastChange' EQUALITY int egerMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE ) olcAttributeTypes: {4}( 1.3.6.1.1.1.1.6 NAME 'shadowMin' EQUALITY integerMat ch SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE ) olcAttributeTypes: {5}( 1.3.6.1.1.1.1.7 NAME 'shadowMax' EQUALITY integerMat ch SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE ) olcAttributeTypes: {6}( 1.3.6.1.1.1.1.8 NAME 'shadowWarning' EQUALITY intege rMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE ) olcAttributeTypes: {7}( 1.3.6.1.1.1.1.9 NAME 'shadowInactive' EQUALITY integ erMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE ) olcAttributeTypes: {8}( 1.3.6.1.1.1.1.10 NAME 'shadowExpire' EQUALITY intege rMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE ) olcAttributeTypes: {9}( 
1.3.6.1.1.1.1.11 NAME 'shadowFlag' EQUALITY integerM atch SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE ) olcAttributeTypes: {10}( 1.3.6.1.1.1.1.12 NAME 'memberUid' EQUALITY caseExac tIA5Match SUBSTR caseExactIA5SubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.12 1.1.26 ) olcAttributeTypes: {11}( 1.3.6.1.1.1.1.13 NAME 'memberNisNetgroup' EQUALITY caseExactIA5Match SUBSTR caseExactIA5SubstringsMatch SYNTAX 1.3.6.1.4.1.146 6.115.121.1.26 ) olcAttributeTypes: {12}( 1.3.6.1.1.1.1.14 NAME 'nisNetgroupTriple' DESC 'Net group triple' SYNTAX 1.3.6.1.1.1.0.0 ) olcAttributeTypes: {13}( 1.3.6.1.1.1.1.15 NAME 'ipServicePort' EQUALITY inte gerMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE ) olcAttributeTypes: {14}( 1.3.6.1.1.1.1.16 NAME 'ipServiceProtocol' SUP name ) olcAttributeTypes: {15}( 1.3.6.1.1.1.1.17 NAME 'ipProtocolNumber' EQUALITY i ntegerMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE ) olcAttributeTypes: {16}( 1.3.6.1.1.1.1.18 NAME 'oncRpcNumber' EQUALITY integ erMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE ) olcAttributeTypes: {17}( 1.3.6.1.1.1.1.19 NAME 'ipHostNumber' DESC 'IP addre ss' EQUALITY caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26{128} ) olcAttributeTypes: {18}( 1.3.6.1.1.1.1.20 NAME 'ipNetworkNumber' DESC 'IP ne twork' EQUALITY caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26{128 } SINGLE-VALUE ) olcAttributeTypes: {19}( 1.3.6.1.1.1.1.21 NAME 'ipNetmaskNumber' DESC 'IP ne tmask' EQUALITY caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26{128 } SINGLE-VALUE ) olcAttributeTypes: {20}( 1.3.6.1.1.1.1.22 NAME 'macAddress' DESC 'MAC addres s' EQUALITY caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26{128} ) olcAttributeTypes: {21}( 1.3.6.1.1.1.1.23 NAME 'bootParameter' DESC 'rpc.boo tparamd parameter' SYNTAX 1.3.6.1.1.1.0.1 ) olcAttributeTypes: {22}( 1.3.6.1.1.1.1.24 NAME 'bootFile' DESC 'Boot image n ame' EQUALITY caseExactIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) olcAttributeTypes: {23}( 
1.3.6.1.1.1.1.26 NAME 'nisMapName' SUP name ) olcAttributeTypes: {24}( 1.3.6.1.1.1.1.27 NAME 'nisMapEntry' EQUALITY caseEx actIA5Match SUBSTR caseExactIA5SubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115. 121.1.26{1024} SINGLE-VALUE ) olcObjectClasses: {0}( 1.3.6.1.1.1.2.0 NAME 'posixAccount' DESC 'Abstraction of an account with POSIX attributes' SUP top AUXILIARY MUST ( cn $ uid $ u idNumber $ gidNumber $ homeDirectory ) MAY ( userPassword $ loginShell $ ge cos $ description ) ) olcObjectClasses: {1}( 1.3.6.1.1.1.2.1 NAME 'shadowAccount' DESC 'Additional attributes for shadow passwords' SUP top AUXILIARY MUST uid MAY ( userPass word $ shadowLastChange $ shadowMin $ shadowMax $ shadowWarning $ shadowIna ctive $ shadowExpire $ shadowFlag $ description ) ) olcObjectClasses: {2}( 1.3.6.1.1.1.2.2 NAME 'posixGroup' DESC 'Abstraction o f a group of accounts' SUP top STRUCTURAL MUST ( cn $ gidNumber ) MAY ( use rPassword $ memberUid $ description ) ) olcObjectClasses: {3}( 1.3.6.1.1.1.2.3 NAME 'ipService' DESC 'Abstraction an Internet Protocol service' SUP top STRUCTURAL MUST ( cn $ ipServicePort $ ipServiceProtocol ) MAY description ) olcObjectClasses: {4}( 1.3.6.1.1.1.2.4 NAME 'ipProtocol' DESC 'Abstraction o f an IP protocol' SUP top STRUCTURAL MUST ( cn $ ipProtocolNumber $ descrip tion ) MAY description ) olcObjectClasses: {5}( 1.3.6.1.1.1.2.5 NAME 'oncRpc' DESC 'Abstraction of an ONC/RPC binding' SUP top STRUCTURAL MUST ( cn $ oncRpcNumber $ description ) MAY description ) olcObjectClasses: {6}( 1.3.6.1.1.1.2.6 NAME 'ipHost' DESC 'Abstraction of a host, an IP device' SUP top AUXILIARY MUST ( cn $ ipHostNumber ) MAY ( l $ description $ manager ) ) olcObjectClasses: {7}( 1.3.6.1.1.1.2.7 NAME 'ipNetwork' DESC 'Abstraction of an IP network' SUP top STRUCTURAL MUST ( cn $ ipNetworkNumber ) MAY ( ipNe tmaskNumber $ l $ description $ manager ) ) olcObjectClasses: {8}( 1.3.6.1.1.1.2.8 NAME 'nisNetgroup' DESC 'Abstraction of a netgroup' SUP top STRUCTURAL MUST cn MAY ( 
nisNetgroupTriple $ memberN isNetgroup $ description ) ) olcObjectClasses: {9}( 1.3.6.1.1.1.2.9 NAME 'nisMap' DESC 'A generic abstrac tion of a NIS map' SUP top STRUCTURAL MUST nisMapName MAY description ) olcObjectClasses: {10}( 1.3.6.1.1.1.2.10 NAME 'nisObject' DESC 'An entry in a NIS map' SUP top STRUCTURAL MUST ( cn $ nisMapEntry $ nisMapName ) MAY de scription ) olcObjectClasses: {11}( 1.3.6.1.1.1.2.11 NAME 'ieee802Device' DESC 'A device with a MAC address' SUP top AUXILIARY MAY macAddress ) olcObjectClasses: {12}( 1.3.6.1.1.1.2.12 NAME 'bootableDevice' DESC 'A devic e with boot parameters' SUP top AUXILIARY MAY ( bootFile $ bootParameter ) ) structuralObjectClass: olcSchemaConfig entryUUID: 9400f87e-3521-1034-9edf-875b6f3874a7 creatorsName: cn=config createTimestamp: 20150120185459Z entryCSN: 20150120185459.723847Z#000000#000#000000 modifiersName: cn=config modifyTimestamp: 20150120185459Zdn: cn={3}inetorgperson,cn=schema,cn=config objectClass: olcSchemaConfig cn: {3}inetorgperson olcAttributeTypes: {0}( 2.16.840.1.113730.3.1.1 NAME 'carLicense' DESC 'RFC2 798: vehicle license or registration plate' EQUALITY caseIgnoreMatch SUBSTR caseIgnoreSubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 ) olcAttributeTypes: {1}( 2.16.840.1.113730.3.1.2 NAME 'departmentNumber' DESC 'RFC2798: identifies a department within an organization' EQUALITY caseIgn oreMatch SUBSTR caseIgnoreSubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1 .15 ) olcAttributeTypes: {2}( 2.16.840.1.113730.3.1.241 NAME 'displayName' DESC 'R FC2798: preferred name to be used when displaying entries' EQUALITY caseIgn oreMatch SUBSTR caseIgnoreSubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1 .15 SINGLE-VALUE ) olcAttributeTypes: {3}( 2.16.840.1.113730.3.1.3 NAME 'employeeNumber' DESC ' RFC2798: numerically identifies an employee within an organization' EQUALIT Y caseIgnoreMatch SUBSTR caseIgnoreSubstringsMatch SYNTAX 1.3.6.1.4.1.1466. 
115.121.1.15 SINGLE-VALUE ) olcAttributeTypes: {4}( 2.16.840.1.113730.3.1.4 NAME 'employeeType' DESC 'RF C2798: type of employment for a person' EQUALITY caseIgnoreMatch SUBSTR cas eIgnoreSubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 ) olcAttributeTypes: {5}( 0.9.2342.19200300.100.1.60 NAME 'jpegPhoto' DESC 'RF C2798: a JPEG image' SYNTAX 1.3.6.1.4.1.1466.115.121.1.28 ) olcAttributeTypes: {6}( 2.16.840.1.113730.3.1.39 NAME 'preferredLanguage' DE SC 'RFC2798: preferred written or spoken language for a person' EQUALITY ca seIgnoreMatch SUBSTR caseIgnoreSubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115. 121.1.15 SINGLE-VALUE ) olcAttributeTypes: {7}( 2.16.840.1.113730.3.1.40 NAME 'userSMIMECertificate' DESC 'RFC2798: PKCS#7 SignedData used to support S/MIME' SYNTAX 1.3.6.1.4. 1.1466.115.121.1.5 ) olcAttributeTypes: {8}( 2.16.840.1.113730.3.1.216 NAME 'userPKCS12' DESC 'RF C2798: personal identity information, a PKCS #12 PFX' SYNTAX 1.3.6.1.4.1.14 66.115.121.1.5 ) olcObjectClasses: {0}( 2.16.840.1.113730.3.2.2 NAME 'inetOrgPerson' DESC 'RF C2798: Internet Organizational Person' SUP organizationalPerson STRUCTURAL MAY ( audio $ businessCategory $ carLicense $ departmentNumber $ displayNam e $ employeeNumber $ employeeType $ givenName $ homePhone $ homePostalAddre ss $ initials $ jpegPhoto $ labeledURI $ mail $ manager $ mobile $ o $ page r $ photo $ roomNumber $ secretary $ uid $ userCertificate $ x500uniqueIden tifier $ preferredLanguage $ userSMIMECertificate $ userPKCS12 ) ) structuralObjectClass: olcSchemaConfig entryUUID: 9401218c-3521-1034-9ee0-875b6f3874a7 creatorsName: cn=config createTimestamp: 20150120185459Z entryCSN: 20150120185459.724897Z#000000#000#000000 modifiersName: cn=config modifyTimestamp: 20150120185459Zdn: olcBackend={0}mdb,cn=config objectClass: olcBackendConfig olcBackend: {0}mdb structuralObjectClass: olcBackendConfig entryUUID: 940161ce-3521-1034-9ee2-875b6f3874a7 creatorsName: cn=config createTimestamp: 20150120185459Z entryCSN: 
20150120185459.726543Z#000000#000#000000 modifiersName: cn=config modifyTimestamp: 20150120185459Zdn: olcDatabase={-1}frontend,cn=config objectClass: olcDatabaseConfig objectClass: olcFrontendConfig olcDatabase: {-1}frontend olcAccess: {0}to * by dn.exact=gidNumber=0+uidNumber=0,cn=peercred,cn=extern al,cn=auth manage by * break olcAccess: {1}to dn.exact="" by * read olcAccess: {2}to dn.base="cn=Subschema" by * read olcSizeLimit: 500 structuralObjectClass: olcDatabaseConfig entryUUID: 940022fa-3521-1034-9eda-875b6f3874a7 creatorsName: cn=config createTimestamp: 20150120185459Z entryCSN: 20150120185459.718381Z#000000#000#000000 modifiersName: cn=config modifyTimestamp: 20150120185459Zdn: olcDatabase={0}config,cn=config objectClass: olcDatabaseConfig olcDatabase: {0}config olcAccess: {0}to * by dn.exact=gidNumber=0+uidNumber=0,cn=peercred,cn=extern al,cn=auth manage by * break structuralObjectClass: olcDatabaseConfig entryUUID: 940033e4-3521-1034-9edb-875b6f3874a7 creatorsName: cn=config createTimestamp: 20150120185459Z entryCSN: 20150120185459.718815Z#000000#000#000000 modifiersName: cn=config modifyTimestamp: 20150120185459Zdn: olcDatabase={1}mdb,cn=config objectClass: olcDatabaseConfig objectClass: olcMdbConfig olcDatabase: {1}mdb olcDbDirectory: /var/lib/ldap olcSuffix: dc=yln,dc=info olcAccess: {0}to attrs=userPassword,shadowLastChange by self write by anonym ous auth by dn="cn=admin,dc=yln,dc=info" write by * none olcAccess: {1}to dn.base="" by * read olcAccess: {2}to * by self write by dn="cn=admin,dc=yln,dc=info" write by * read olcLastMod: TRUE olcRootDN: cn=admin,dc=yln,dc=info olcRootPW:: <password hash> olcDbCheckpoint: 512 30 olcDbConfig: {0}set_cachesize 0 2097152 0 olcDbConfig: {1}set_lk_max_objects 1500 olcDbConfig: {2}set_lk_max_locks 1500 olcDbConfig: {3}set_lk_max_lockers 1500 olcDbIndex: objectClass eq structuralObjectClass: olcMdbConfig entryUUID: 94016bce-3521-1034-9ee3-875b6f3874a7 creatorsName: cn=config createTimestamp: 20150120185459Z 
entryCSN: 20150120185459.726800Z#000000#000#000000 modifiersName: cn=config modifyTimestamp: 20150120185459ZAdmittedly, most of this is Greek to me so I don't know what to do to troubleshoot. What might I try next? Thanks for your help!
Moving OpenLDAP to new server -- getting olcBackend error
At least in Debian (and derivatives thereof), a shared library's development files are split off into a separate binary package:
If there are development files associated with a shared library, the source package needs to generate a binary development package named libraryname-dev, or if you need to support multiple development versions at a time, librarynameapiversion-dev. Installing the development package must result in installation of all the development files necessary for compiling programs against that shared library.
"Development files" in this context mostly means C/C++ header files, but importantly it often includes a symbolic link to the shared library itself:
The development package should contain a symlink for the associated shared library without a version number. For example, the libgdbm-dev package should include a symlink from /usr/lib/libgdbm.so to libgdbm.so.3.0.0. This symlink is needed by the linker (ld) when compiling packages, as it will only look for libgdbm.so when compiling dynamically.
In this case, although you already have the shared libraries liblber-2.4.so.2 liblber-2.4.so.2.10.8 in /usr/lib/x86_64-linux-gnu, you do not appear to have the symbolic link /usr/lib/x86_64-linux-gnu/liblber.so, which is provided by the corresponding development package libldap2-dev.
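The behaviour can be reproduced without OpenLDAP at all: given -llber-2.4, ld searches for liblber-2.4.so (or liblber-2.4.a), never the versioned runtime name. A sketch using empty placeholder files — the filenames are taken from the question, and no real library is involved:

```shell
# ld resolves -lNAME to libNAME.so / libNAME.a at link time; the versioned
# SONAME file (liblber-2.4.so.2) is only consulted at run time.  Without the
# unversioned dev symlink, "-llber-2.4 could not be found" is the expected error.
mkdir -p libdemo
touch libdemo/liblber-2.4.so.2.10.8                     # the actual shared object
ln -sf liblber-2.4.so.2.10.8 libdemo/liblber-2.4.so.2   # runtime (SONAME) link
ln -sf liblber-2.4.so.2 libdemo/liblber-2.4.so          # link-time symlink a -dev package provides
ls -l libdemo/
```

The last `ln -sf` is exactly what installing the development package restores.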
I am trying to build SimGear from the FlightGear project using the download_and_compile.sh script (which uses CMake to build the binaries). The build went fine so far, but when the script tried linking the built object files together into a library, I get tons of
//usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2: warning: undefined reference to [emailprotected]_2
(where ... is a different function name for each message). Now I thought I would just manually instruct CMake to link the lber library to the library being built, by adding -DCMAKE_CXX_STANDARD_LIBRARIES="-llber-2.4" to CMake's arguments. That resulted in
/usr/bin/ld: -llber-2.4 could not be found
Which is a riddle to me, because it is there:
$ ls /usr/lib/x86_64-linux-gnu | grep lber liblber-2.4.so.2 liblber-2.4.so.2.10.8
In fact, I should not be getting the undefined reference errors, because these functions are all there:
$ nm /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2 $ nm -D /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2 | grep ber 0000000000005fe0 T ber_alloc 0000000000005fa0 T ber_alloc_t 0000000000006d50 T ber_bprint 0000000000007ec0 T ber_bvarray_add 0000000000007df0 T ber_bvarray_add_x 0000000000007cd0 T ber_bvarray_dup_x 0000000000007cc0 T ber_bvarray_free 0000000000007c30 T ber_bvarray_free_x 0000000000007830 T ber_bvdup 0000000000007700 T ber_bvecadd 0000000000007650 T ber_bvecadd_x 0000000000007640 T ber_bvecfree 00000000000075c0 T ber_bvecfree_x 00000000000075b0 T ber_bvfree 0000000000007570 T ber_bvfree_x 0000000000007c20 T ber_bvreplace 0000000000007b80 T ber_bvreplace_x 0000000000002c70 T ber_decode_oid 0000000000006fc0 T ber_dump 0000000000006000 T ber_dup 0000000000007820 T ber_dupbv 0000000000007710 T ber_dupbv_x 0000000000004cc0 T ber_encode_oid 0000000000006ab0 T ber_errno_addr 0000000000006a30 T ber_error_print 0000000000003a80 T ber_first_element 0000000000006250 T ber_flatten 0000000000006170 T ber_flatten2 0000000000005f90 T ber_flush 0000000000005db0 T ber_flush2 0000000000005d70 T ber_free
0000000000005d10 T ber_free_buf 00000000000038d0 T ber_get_bitstringa 0000000000003a70 T ber_get_boolean 0000000000003150 T ber_get_enum 0000000000003080 T ber_get_int 0000000000006400 T ber_get_next 0000000000003a20 T ber_get_null 0000000000007ed0 T ber_get_option 0000000000003730 T ber_get_stringa 0000000000003810 T ber_get_stringal 00000000000037a0 T ber_get_stringa_null 0000000000003160 T ber_get_stringb 00000000000031f0 T ber_get_stringbv 0000000000003650 T ber_get_stringbv_null 0000000000002e30 T ber_get_tag 0000000000006380 T ber_init 00000000000060c0 T ber_init2 0000000000006160 T ber_init_w_nullc 000000000020d168 B ber_int_errno_fn 000000000020d178 B ber_int_log_proc 000000000020d190 B ber_int_memory_fns 000000000020d1a0 B ber_int_options 0000000000009590 T ber_int_sb_close 0000000000009610 T ber_int_sb_destroy 0000000000009500 T ber_int_sb_init 0000000000009710 T ber_int_sb_read 00000000000099e0 T ber_int_sb_write 00000000000069d0 T ber_len 0000000000006f70 T ber_log_bprint 00000000000070b0 T ber_log_dump 0000000000007120 T ber_log_sos_dump 0000000000007a50 T ber_mem2bv 0000000000007950 T ber_mem2bv_x 0000000000007460 T ber_memalloc 0000000000007400 T ber_memalloc_x 00000000000074d0 T ber_memcalloc 0000000000007470 T ber_memcalloc_x 0000000000007390 T ber_memfree 0000000000007330 T ber_memfree_x 0000000000007560 T ber_memrealloc 00000000000074e0 T ber_memrealloc_x 00000000000073f0 T ber_memvfree 00000000000073a0 T ber_memvfree_x 0000000000003b00 T ber_next_element 0000000000002e80 T ber_peek_element 0000000000002fd0 T ber_peek_tag 0000000000005370 T ber_printf 00000000000069e0 T ber_ptrlen 0000000000005080 T ber_put_berval 0000000000005100 T ber_put_bitstring 0000000000005290 T ber_put_boolean 0000000000004f30 T ber_put_enum 0000000000004f50 T ber_put_int 0000000000005220 T ber_put_null 0000000000004f70 T ber_put_ostring 0000000000005350 T ber_put_seq 0000000000005360 T ber_put_set 00000000000050b0 T ber_put_string 000000000020d170 B ber_pvt_err_file 
0000000000006ad0 T ber_pvt_log_output 000000000020d008 D ber_pvt_log_print 0000000000006c20 T ber_pvt_log_printf 000000000020d1e0 B ber_pvt_opt_on 0000000000008f00 T ber_pvt_sb_buf_destroy 0000000000008ee0 T ber_pvt_sb_buf_init 0000000000009180 T ber_pvt_sb_copy_out 00000000000093b0 T ber_pvt_sb_do_write 0000000000008fe0 T ber_pvt_sb_grow_buffer 00000000000094c0 T ber_pvt_socket_set_nonblock 0000000000005a20 T ber_read 0000000000005ad0 T ber_realloc 0000000000006a20 T ber_remaining 00000000000062f0 T ber_reset 00000000000069f0 T ber_rewind 0000000000003ba0 T ber_scanf 00000000000080f0 T ber_set_option 00000000000059a0 T ber_skip_data 0000000000002f90 T ber_skip_element 0000000000003020 T ber_skip_tag 0000000000008d30 T ber_sockbuf_add_io 0000000000009560 T ber_sockbuf_alloc 0000000000009800 T ber_sockbuf_ctrl 00000000000096a0 T ber_sockbuf_free 000000000020d060 D ber_sockbuf_io_debug 000000000020d0a0 D ber_sockbuf_io_fd 000000000020d0e0 D ber_sockbuf_io_readahead 000000000020d120 D ber_sockbuf_io_tcp 000000000020d020 D ber_sockbuf_io_udp 0000000000008e20 T ber_sockbuf_remove_io 0000000000007130 T ber_sos_dump 00000000000069c0 T ber_start 0000000000005310 T ber_start_seq 0000000000005330 T ber_start_set 0000000000007940 T ber_str2bv 0000000000007840 T ber_str2bv_x 0000000000007ac0 T ber_strdup 0000000000007a60 T ber_strdup_x 0000000000007b70 T ber_strndup 0000000000007b10 T ber_strndup_x 0000000000007ad0 T ber_strnlen 0000000000005c00 T ber_write
ldd also shows that libldap is referencing the right liblber:
$ ldd /usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2 | grep lber liblber-2.4.so.2 => /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2 (0x00007f28c8bdc000)
Does anyone have any ideas? I don't… If I forgot any details, please just let me know, and I'll add them!
Weird linking issue with libldap using cmake
I figured out the right approach: Create a new user in LDAP, say cn=sudoread,dc=example,dc=com: cat > /tmp/tmplif <<EOF dn: cn=sudoread,dc=example,dc=com objectClass: top objectClass: person cn: sudoread sn: read userPassword: sudoread EOF $ ldapadd -H ldap://localhost -f /tmp/tmplif -D 'cn=root,dc=example,dc=com' -W $ printf "sudoread" | base64 c3Vkb3JlYWQ= Grant access to ou=sudoers,dc=example,dc=com for the user created above, before the catch-all rule granting read to everyone: access to dn.one="ou=sudoers,dc=example,dc=com" by dn="cn=sudoread,dc=example,dc=com" read access to * by * read Use the binddn and bindpw parameters in sudo-ldap.conf: $ cat >> /etc/sudo-ldap.conf <<EOF binddn cn=sudoread,dc=example,dc=com bindpw base64:c3Vkb3JlYWQ= EOF This creates a dedicated user that sudo can bind as when querying the sudoers subtree, while the rest of the LDAP tree stays readable to anonymous clients as before.
I configured sudoers LDAP (with OpenLDAP as the backend) using the instructions provided on the official sudoers website. (link) I also restricted /etc/sudo-ldap.conf with 600 root:root permissions so that normal users on the machine cannot learn which LDAP server they are talking to. But the LDAP server at the moment allows anonymous connections to everything, including the sudoers OU. Is it possible in any way to restrict the sudoers OU (say ou=sudoers,dc=example,dc=com) on the LDAP server to a specific user while keeping the rest of the LDAP structure open to anonymous access? (I couldn't figure out a proper way to do this with access control.) Configuration details: slapd.conf: access to dn.subtree="dc=example,dc=com" by * read sudo-ldap.conf: uri ldap://LDAP_SERVER sudoers_base ou=sudoers,dc=example,dc=com Let me know if you need further details.
Ideal way to restrict querying of sudoers ldap configuration by anonymous users
If memory is exhaustively used up by processes, to an extent that can threaten the stability of the system, then the OOM killer comes into the picture. NOTE: It is the task of the OOM Killer to continue killing processes until enough memory is freed for the smooth functioning of the rest of the processes that the kernel is attempting to run. The OOM Killer has to select the best process(es) to kill. Best here refers to the process which will free up the maximum memory upon being killed and is also the least important to the system. The primary goal is to kill the smallest number of processes, minimizing the damage done while maximizing the amount of memory freed. To facilitate this, the kernel maintains an oom_score for each of the processes. You can see the oom_score of each process in the /proc filesystem under its pid directory. $ cat /proc/10292/oom_score The higher the value of oom_score of any process, the higher is its likelihood of getting killed by the OOM Killer in an out-of-memory situation. How is the OOM_Score calculated? In David's patch set, the old badness() heuristics are almost entirely gone. Instead, the calculation turns into a simple question of what percentage of the available memory is being used by the process. If the system as a whole is short of memory, then "available memory" is the sum of all RAM and swap space available to the system. If instead, the OOM situation is caused by exhausting the memory allowed to a given cpuset/control group, then "available memory" is the total amount allocated to that control group. A similar calculation is made if limits imposed by a memory policy have been exceeded. In each case, the memory use of the process is deemed to be the sum of its resident set (the number of RAM pages it is using) and its swap usage. 
This calculation produces a percent-times-ten number as a result; a process which is using every byte of the memory available to it will have a score of 1000, while a process using no memory at all will get a score of zero. There are very few heuristic tweaks to this score, but the code does still subtract a small amount (30) from the score of root-owned processes on the notion that they are slightly more valuable than user-owned processes. One other tweak which is applied is to add the value stored in each process's oom_score_adj variable, which can be adjusted via /proc. This knob allows the adjustment of each process's attractiveness to the OOM killer in user space; setting it to -1000 will disable OOM kills entirely, while setting it to +1000 is the equivalent of painting a large target on the associated process. References: http://www.queryhome.com/15491/whats-happening-kernel-starting-killer-choose-which-process https://serverfault.com/a/571326
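To see those scores in practice, a quick read-only loop over /proc ranks the processes the OOM killer would currently find most attractive (Linux-only; nothing here is specific to any distribution):

```shell
# Print the five highest oom_score values with the owning process name;
# a score near 1000 means "using nearly all memory available to it".
for d in /proc/[0-9]*; do
    printf '%s %s\n' "$(cat "$d/oom_score" 2>/dev/null)" \
                     "$(cat "$d/comm" 2>/dev/null)"
done | sort -rn | head -n 5
```

Processes may exit mid-loop, which is why the reads are allowed to fail quietly.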
This answer explains the actions taken by the kernel when an OOM situation is encountered based on the value of sysctl vm.overcommit_memory. When overcommit_memory is set to 0 or 1, overcommit is enabled, and programs are allowed to allocate more memory than is really available. Now what happens when we run out of memory in this situation? How does the OOM killer decide which process to kill first?
How does the OOM killer decide which process to kill first?
The kernel will have logged a bunch of stuff before this happened, but most of it will probably not be in /var/log/messages, depending on how your (r)syslogd is configured. Try: grep oom /var/log/* grep total_vm /var/log/* The former should show up a bunch of times and the latter in only one or two places. That is the file you want to look at. Find the original "Out of memory" line in one of the files that also contains total_vm. Thirty seconds to a minute (could be more, could be less) before that line you'll find something like: kernel: foobar invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0 You should also find a table somewhere between that line and the "Out of memory" line with headers like this: [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name This may not tell you much more than you already know, but the fields are: pid The process ID. uid User ID. tgid Thread group ID. total_vm Virtual memory use (in 4 kB pages) rss Resident memory use (in 4 kB pages) nr_ptes Page table entries swapents Swap entries oom_score_adj Usually 0; a lower number indicates the process will be less likely to die when the OOM killer is invoked. You can mostly ignore nr_ptes and swapents, although I believe these are factors in determining who gets killed. The victim is not necessarily the process using the most memory, but it very likely is. For more about the selection process, see here. Basically, the process that ends up with the highest oom score is killed -- that's the "score" reported on the "Out of memory" line; unfortunately the other scores aren't reported, but that table provides some clues in terms of factors. Again, this probably won't do much more than illuminate the obvious: the system ran out of memory and mysqld was chosen to die because killing it would release the most resources. This does not necessarily mean mysqld is doing anything wrong. 
You can look at the table to see if anything else went way out of line at the time, but there may not be any clear culprit: the system can run out of memory simply because you misjudged or misconfigured the running processes.
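If you'd rather not eyeball the table, a small pipeline can rank the recorded processes by total_vm, converted from 4 kB pages to MB. The log path and field positions are assumptions: rows are expected to start with "[ pid]" right after the syslog prefix, with the process name as the last field -- adjust both if your format differs.

```shell
# Rank OOM-table rows by total_vm in MB (total_vm pages / 256 = MB).
grep -hE '\[ *[0-9]+\] +[0-9]+ +[0-9]+ +[0-9]+' /var/log/messages* 2>/dev/null |
awk '{ for (i = 1; i <= NF; i++) if ($i ~ /\]$/) break;   # locate the [pid] field
       if (i + 3 <= NF) printf "%10.1f MB  %s\n", $(i+3) / 256, $NF }' |
sort -rn | head
```

The top entries are the processes that held the most virtual memory at kill time.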
The following report is thrown in my messages log: kernel: Out of memory: Kill process 9163 (mysqld) score 511 or sacrifice child kernel: Killed process 9163, UID 27, (mysqld) total-vm:2457368kB, anon-rss:816780kB, file-rss:4kB It doesn't matter whether this problem is with httpd, mysqld or postfix; I am curious how I can continue debugging it. How can I get more info about why PID 9163 was killed? I am not sure whether Linux keeps a history of terminated PIDs somewhere. If this occurred in your message log file, how would you troubleshoot this issue step by step? # free -m total used free shared buffers cached Mem: 1655 934 721 0 10 52 -/+ buffers/cache: 871 784 Swap: 109 6 103
Debug out-of-memory with /var/log/messages
Looking at the kernel source file mm/oom_kill.c: after such a message is written to the system log, the OOM Killer checks the children of the identified process and evaluates whether it can kill one of them in place of the process itself. Here is a comment extracted from the source file explaining this: /* * If any of p's children has a different mm and is eligible for kill, * the one with the highest oom_badness() score is sacrificed for its * parent. This attempts to lose the minimal amount of work done while * still freeing memory. */
My computer recently ran out of memory (a not-unexpected consequence of compiling software while working with large GIS datasets). In the system log detailing how it dealt with the OOM condition is the following line: Out of memory: Kill process 7429 (java) score 259 or sacrifice childWhat is that or sacrifice child about? Surely it isn't pondering some dark ritual to keep things going?
What is the Out of Memory message: sacrifice child?
It's possible to register for a notification for when a cgroup's memory usage goes above a threshold. In principle, setting the threshold at a suitable point below the actual limit would let you send a signal or take other action. See: https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt
In our cluster, we are restricting our processes resources, e.g. memory (memory.limit_in_bytes). I think, in the end, this is also handled via the OOM killer in the Linux kernel (looks like it by reading the source code). Is there any way to get a signal before my process is being killed? (Just like the -notify option for SGE's qsub, which will send SIGUSR1 before the process is killed.) I read about /dev/mem_notify here but I don't have it - is there something else nowadays? I also read this which seems somewhat relevant. I want to be able to at least dump a small stack trace and maybe some other useful debug info - but maybe I can even recover by freeing some memory. One workaround I'm currently using is this small script which frequently checks if I'm close (95%) to the limit and if so, it sends the process a SIGUSR1. In Bash, I'm starting this script in background (cgroup-mem-limit-watcher.py &) so that it watches for other procs in the same cgroup and it quits automatically when the parent Bash process dies.
receive signal before process is being killed by OOM killer / cgroups
Here's what I've done to 'solve' it:Set MaxClients 7 (based on (1740.8Mb Memory on server - 900Mb for MySQL + other stuff) / 111Mb average usage per httpd process = 7.5747747747747747747747747747748)Therefore: <IfModule prefork.c> StartServers 8 MinSpareServers 5 MaxSpareServers 20 ServerLimit 256 MaxClients 7 MaxRequestsPerChild 4000 </IfModule>Disable all Apache modules except for authz_host_module, log_config_module, expires_module, deflate_module, setenvif_module, mime_module, autoindex_module, negotiation_module, dir_module, alias_module, rewrite_module, php5_module Remove the mod_ssl package since the client isn't using https:// whatsoever.I'll report back once this new configuration has been running a while to see if this solves it. Some inspiration here was borrowed from: http://www.activoinc.com/blog/2009/08/31/performance-optimized-httpd-conf-for-magento-ecommerce/ and http://www.activoinc.com/downloads/httpd.conf-magento
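The same arithmetic can be scripted so it is easy to re-run after the memory profile changes; all three inputs below are assumptions taken from the numbers above (RAM from free -m, reserved memory for MySQL and the rest, average per-child RSS from the ps|awk one-liner):

```shell
# MaxClients ~= (total RAM - memory reserved for other services) / avg httpd RSS
total_mb=1740    # server RAM, from `free -m`
reserved_mb=900  # MySQL + other stuff
httpd_mb=111     # average RSS per httpd child
echo "MaxClients $(( (total_mb - reserved_mb) / httpd_mb ))"   # → MaxClients 7
```

Shell integer division rounds down, which is the safe direction here: better one child too few than one too many.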
Having some problems with httpd (Apache/2.2.29) memory usage. Over time, memory usage in the httpd processes creep up until it's eventually at 100%. Last time I restarted httpd was about 24 hours ago. Output from free -m is: [ec2-user@www ~]$ free -m total used free shared buffers cached Mem: 1655 1415 239 0 202 424 -/+ buffers/cache: 788 866 Swap: 1023 4 1019To prove that it's definitely httpd, I restarted httpd and ran free -m again: [ec2-user@www ~]$ sudo service httpd restart Stopping httpd: [ OK ] Starting httpd: [ OK ] [ec2-user@www ~]$ free -m total used free shared buffers cached Mem: 1655 760 894 0 202 360 -/+ buffers/cache: 197 1457 Swap: 1023 4 1019So, restarting Apache takes free memory from 239 Mb to 894 Mb - which seems like a big leap. I've been going through the list of currently enabled Apache modules (there's quite a lot) and disabled/removed mod_wsgi and mod_perl (neither of which are required for this server, which is running a PHP-based web application - Magento, specifically). Based on https://servercheck.in/blog/3-small-tweaks-make-apache-fly, I've run ps aux | grep 'httpd' | awk '{print $6/1024 " MB";}' and get the following output:[root@www ~]# ps aux | grep 'httpd' | awk '{print $6/1024 " MB";}' 15.1328 MB 118.09 MB 127.449 MB 129.059 MB 117.734 MB 113.824 MB 125.062 MB 123.922 MB 119.855 MB 108.066 MB 136.23 MB 114.031 MB 113.27 MB 110.695 MB 102.113 MB 113.234 MB 186.816 MB 118.602 MB 0.835938 MBRunning the other suggested diagnosis tool for MaxClients which is ps aux | grep 'httpd' | awk '{print $6/1024;}' | awk '{avg += ($1 - avg) / NR;} END {print avg " MB";}' returns the following: [root@www ~]# ps aux | grep 'httpd' | awk '{print $6/1024;}' | awk '{avg += ($1 - avg) / NR;} END {print avg " MB";}' 110.212 MBThis server (Amazon AWS m1.small instance) has 1.7 Gb of RAM. So, therefore: Any further pointers/suggestions on how best to tweak the httpd settings or how to diagnose what exactly might be causing this?
httpd memory usage
Several modern dæmon supervision systems have a means for doing this. (Indeed, since there is a chain loading tool for the job, arguably they all have a means for doing this.)Upstart: Use oom score in the job file.oom score -500 systemd: Use the OOMScoreAdjust= setting in the service unit. You can use service unit patch files to affect pre-packaged service units.[Service]OOMScoreAdjust=-500 daemontools family: Use the oom-kill-protect tool from the nosh toolset in the run program for the service.If you are converting a system service unit, the convert-systemd-units tool will in fact convert the OOMScoreAdjust= setting into such an invocation of oom-kill-protect.#!/bin/nosh…oom-kill-protect -- -500…program argumentsAs a bonus, you can make it parameterizable:oom-kill-protect -- fromenv and set the value of the parameter in the service's environment (presumed to be read from an envdir associated with the service, here manipulated with the nosh toolset's rcctl shim): rcctl set servicename oomprotect -500Further readingJonathan de Boyne Pollard (2016). oom-kill-protect. nosh toolset. Softwares. James Hunt and Clint Byrum (2014). "oom score". Upstart Cookbook. Lennart Poettering (2013-10-07). "OOMScoreAdjust". systemd.exec. systemd manual pages. freedesktop.org. Jonathan de Boyne Pollard. rcctl. nosh toolset. Softwares. https://unix.stackexchange.com/a/409454/5132
Running some Linux servers with single or just a few vital system service daemons, I would like to adjust the OOM killer for those daemonized processes in case something odd happens. For example, today some Ubuntu server running MySQL got a killed MySQL daemon because tons of apt-checker processes were consuming all memory and the kernel thought it was a good idea to kill MySQL. I know I can adjust the score using the /proc/$(pidof mysqld)/oom_score_adj file to give the kernel some clue I don't prefer MySQL to be killed, yet that doesn't survive a restart of the service. Should I edit init/upstart scripts from the package to include these adjustments? I don't think that's a very elegant solution as I would make adjustments to files belonging to a package. Would it be possible to hook into upstart/init scripts in general and conditionally adjust it? Or would you suggest running an indefinite script like while true{ adjust_oom(); sleep 60;}?
How to set OOM killer adjustments for daemons permanently?
Additional information provided in the comments reveals that the OP is using a GUI method to create the .tar.gz file. GUI software often includes a lot more bloat than the equivalent command-line software, or performs additional unnecessary tasks for the sake of some "extra" feature such as a progress bar. It wouldn't surprise me if the GUI software is trying to collect a list of all the filenames in memory. It's unnecessary to do that in order to create an archive. The dedicated tools tar and gzip are definitely designed to work with streaming input and output, which means that they can deal with input and output a lot bigger than memory. If you avoid the GUI program, you can most likely generate this archive using a completely normal everyday tar invocation like this: tar czf foo.tar.gz foo where foo is the directory that contains all your 5 million files. The other answers to this question give you a couple of additional alternative tar commands to try in case you want to split the result into multiple pieces, etc...
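Should the third party be unable to accept one 1 TB file, a hedged variant of the same streaming approach splits the archive into fixed-size pieces as it is written (the 100G chunk size and the part- prefix are arbitrary choices):

```shell
# Stream the archive through gzip and split; neither the whole archive nor
# an in-memory file list is ever materialized.
tar cf - foo | gzip | split -b 100G - foo.tar.gz.part-

# The recipient reassembles and extracts in one stream:
cat foo.tar.gz.part-* | tar xzf -
```

Since split names its pieces in sorted order, the plain glob on the receiving side reassembles them correctly.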
I have 5 million files which take up about 1TB of storage space. I need to transfer these files to a third party. What's the best way to do this? I have tried reducing the size using .tar.gz, but even though my computer has 8GB RAM, I get an "out of system memory" error. Is the best solution to snail-mail the files over?
Memory problems when compressing and transferring a large number of small files (1TB in total)
What you are asking is, basically, a kernel-based callback on a low-memory condition, right? If so, I strongly believe that the kernel does not provide such a mechanism, and for a good reason: being low on memory, it should immediately run the only thing that can free some memory - the OOM killer. Any other programs can bring the machine to a halt. Anyway, you can run a simple monitoring solution in userspace. I had the same low-memory debug/action requirement in the past, and I wrote a simple bash script which did the following: monitor for a soft watermark: if memory usage is above this threshold, collect some statistics (processes, free/used memory, etc) and send a warning email; monitor for a hard watermark: if memory usage is above this threshold, collect some statistics and kill the most memory-hungry (or least important) processes, then send an alert email. Such a script would be very lightweight, and it can poll the machine at a small interval (i.e. every 15 seconds)
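A minimal one-pass sketch of such a watermark script, to be run from cron or a loop every ~15 seconds; the 80/95% thresholds and the actions are assumptions, and the kill line is deliberately left commented:

```shell
#!/bin/bash
# One polling pass: compare memory in use against soft/hard watermarks.
SOFT=80 HARD=95
read -r total avail <<<"$(awk '/^MemTotal/{t=$2} /^MemAvailable/{a=$2} END{print t, a}' /proc/meminfo)"
used=$(( (total - avail) * 100 / total ))
echo "memory in use: ${used}%"
if [ "$used" -ge "$HARD" ]; then
    echo "hard watermark: collect stats, then kill the hungriest process"
    ps -eo pid,rss,comm --sort=-rss | head -n 5
    # kill -TERM "$(ps -eo pid= --sort=-rss | head -n 1)"
elif [ "$used" -ge "$SOFT" ]; then
    echo "soft watermark: send a warning with current usage"
    free -m
fi
```

MemAvailable (kernel 3.14+) is used rather than MemFree so that reclaimable page cache does not count as "used".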
So, I thought this would be a pretty simple thing to locate: a service / kernel module that, when the kernel notices userland memory is running low, triggers some action (e.g. dumping a process list to a file, pinging some network endpoint, whatever) within a process that has its own dedicated memory (so it won't fail to fork() or suffer from any of the other usual OOM issues). I found the OOM killer, which I understand is useful, but which doesn't really do what I'd need to do. Ideally, if I'm running out of memory, I want to know why. I suppose I could write my own program that runs on startup and uses a fixed amount of memory, then only does stuff once it gets informed of low memory by the kernel, but that brings up its own question... Is there even a syscall to be informed of something like that? A way of saying to the kernel "hey, wake me up when we've only got 128 MB of memory left"? I searched around the web and on here but I didn't find anything fitting that description. Seems like most people use polling on a time delay, but the obvious problem with that is it makes it way less likely you'll be able to know which process(es) caused the problem.
How to trigger action on low-memory condition in Linux?
Linux does memory overcommit. That means it allows processes to request more memory than is really available on the system. When a program tries to malloc(), the kernel says "OK you got the memory", but doesn't reserve it. The memory will only be reserved when the process writes something into this space. To see the difference, you have 2 indicators: Virtual Memory and Resident Memory. Virtual is the memory requested by the process, Resident is the memory really used by the process. With this system, you may go into "overbooking", where the kernel grants more memory than is available. Then, when your system reaches 0 bytes of free memory and swap, it must sacrifice (kill) a process to gain free memory. That's when the OOM Killer goes into action. The OOM Killer selects a process based on its memory consumption, and many other elements (a parent gains 1/2 of the score of its children; if it's a root-owned process, its score is divided by 4, etc.). Have a look at Linux-MM.org/OOM_Killer You can influence the OOM scoring by tuning the /proc/MySQL_PID/oom_adj file. By setting it to -17, your process will never be killed. But before doing that, you should tweak your MySQL configuration file in order to limit MySQL memory usage. Otherwise, the OOM Killer will kill other system processes (like SSH, crontab, etc...) and your server will be in a very unstable state, maybe leading to data corruption which is worse than anything. Also, you may consider using more swap. [EDIT] You may also change its overcommit behaviour via these 2 sysctls: vm.overcommit_memory vm.overcommit_ratio As stated in the Kernel Documentation: overcommit_memory: This value contains a flag that enables memory overcommitment. When this flag is 0, the kernel attempts to estimate the amount of free memory left when userspace requests more memory. When this flag is 1, the kernel pretends there is always enough memory until it actually runs out. 
When this flag is 2, the kernel uses a "never overcommit" policy that attempts to prevent any overcommit of memory. Note that user_reserve_kbytes affects this policy. This feature can be very useful because there are a lot of programs that malloc() huge amounts of memory "just-in-case" and don't use much of it. The default value is 0. See Documentation/vm/overcommit-accounting and security/commoncap.c::cap_vm_enough_memory() for more information. overcommit_ratio: When overcommit_memory is set to 2, the committed address space is not permitted to exceed swap plus this percentage of physical RAM. See above.[/EDIT]
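With mode 2 in effect, the resulting hard ceiling is what /proc/meminfo reports as CommitLimit, and you can recompute it yourself from the formula above. This sketch ignores the newer vm.overcommit_kbytes knob, which overrides the ratio when non-zero, so the two figures may differ on systems using it:

```shell
# CommitLimit = SwapTotal + MemTotal * overcommit_ratio / 100 (all in kB)
ratio=$(cat /proc/sys/vm/overcommit_ratio)
awk -v r="$ratio" '
    /^MemTotal/    { m = $2 }
    /^SwapTotal/   { s = $2 }
    /^CommitLimit/ { c = $2 }
    END { printf "computed: %d kB  kernel reports: %d kB\n", s + m * r / 100, c }
' /proc/meminfo
```

Reading the values only needs the proc filesystem; actually switching modes (sysctl -w vm.overcommit_memory=2) needs root.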
On one of our MySQL master, OOM Killer got invoked and killed MySQL server which lead to big outage. Following is the kernel log: [2006013.230723] mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0 [2006013.230733] Pid: 1319, comm: mysqld Tainted: P 2.6.32-5-amd64 #1 [2006013.230735] Call Trace: [2006013.230744] [<ffffffff810b6708>] ? oom_kill_process+0x7f/0x23f [2006013.230750] [<ffffffff8106bde2>] ? timekeeping_get_ns+0xe/0x2e [2006013.230754] [<ffffffff810b6c2c>] ? __out_of_memory+0x12a/0x141 [2006013.230757] [<ffffffff810b6d83>] ? out_of_memory+0x140/0x172 [2006013.230762] [<ffffffff810baae8>] ? __alloc_pages_nodemask+0x4ec/0x5fc [2006013.230768] [<ffffffff812fca02>] ? io_schedule+0x93/0xb7 [2006013.230773] [<ffffffff810bc051>] ? __do_page_cache_readahead+0x9b/0x1b4 [2006013.230778] [<ffffffff810652f8>] ? wake_bit_function+0x0/0x23 [2006013.230782] [<ffffffff810bc186>] ? ra_submit+0x1c/0x20 [2006013.230785] [<ffffffff810b4e53>] ? filemap_fault+0x17d/0x2f6 [2006013.230790] [<ffffffff810cae1e>] ? __do_fault+0x54/0x3c3 [2006013.230794] [<ffffffff812fce29>] ? __wait_on_bit_lock+0x76/0x84 [2006013.230798] [<ffffffff810cd172>] ? handle_mm_fault+0x3b8/0x80f [2006013.230803] [<ffffffff8103a9a0>] ? pick_next_task+0x21/0x3c [2006013.230808] [<ffffffff810168ba>] ? sched_clock+0x5/0x8 [2006013.230813] [<ffffffff81300186>] ? do_page_fault+0x2e0/0x2fc [2006013.230817] [<ffffffff812fe025>] ? page_fault+0x25/0x30This machine has 64GB RAM. Following are the mysql config variables: innodb_buffer_pool_size = 48G innodb_additional_mem_pool_size = 512M innodb_log_buffer_size = 64MExcept some of the nagios plugins and metric collection scripts, nothing else runs on this machine. Can someone help me to find out why OOM killer got invoked and how can i prevent it to get invoked in future. Is there any way I can tell OOM killer not to kill mysql server. I know we can set oom_adj value to very less for a process to prevent it from getting killed by OOM killer. 
But is there any other way to prevent this?
OOM Killer - killed MySQL server
Consider this scenario:You have 4GB of memory free. A faulty process allocates 3.999GB. You open a task manager to kill the runaway process. The task manager allocates 0.002GB.If the process that got killed was the last process to request memory, your task manager would get killed. Or:You have 4GB of memory free. A faulty process allocates 3.999GB. You open a task manager to kill the runaway process. The X server allocates 0.002GB to handle the task manager's window.Now your X server gets killed. It didn't cause the problem; it was just "in the wrong place at the wrong time". It happened to be the first process to allocate more memory when there was none left, but it wasn't the process that used all the memory to start with.
It is explained here: Will Linux start killing my processes without asking me if memory gets short? that the OOM-Killer can be configured via overcommit_memory and that:2 = no overcommit. Allocations fail if asking too much. 0, 1 = overcommit (heuristically or always). Kill some process(es) based on some heuristics when too much memory is actually accessed.Now, I may completely misunderstand that, but why isn't there an option (or why isn't it the default) to kill the very process that actually tries to access too much memory it allocated?
Why can't the OOM-Killer just kill the process that asks for too much?
You have the config parameters in the wrong section. If you look in your logs, you should see: Unknown lvalue 'MemoryAccounting' in section 'Unit' Unknown lvalue 'MemoryHigh' in section 'Unit' Unknown lvalue 'MemoryMax' in section 'Unit'https://www.freedesktop.org/software/systemd/man/systemd.resource-control.htmlThe resource control configuration options are configured in the [Slice], [Scope], [Service], [Socket], [Mount], or [Swap] sections, depending on the unit type.Thus you want: [Unit] Description="Start memory gobbler" After=network.target[Service] ExecStart=/data/memgoble 8388600 MemoryAccounting=true MemoryHigh=1024K MemoryMax=4096K
I'm trying to use the systemd infrastructure to kill my memory leaking service when its memory usage reaches some value. The configuration file used is this: [Unit] Description="Start memory gobbler" After=network.target MemoryAccounting=true MemoryHigh=1024K MemoryMax=4096K[Service] ExecStart=/data/memgoble 8388600systemd version is 237. However, no matter what I set in the MemoryMax the kernel would kill the process on its own terms, usually when its memory consumption reaches almost the entire physical RAM. I'm running this on an embedded system with no swap. Anyone sees an obvious error in the configuration? Perhaps there are some other settings that I'm missing.
systemd memory limit not working/example
The 1 GiB limit for Linux kernel memory in a 32-bit system is a consequence of 32-bit addressing, and it's a pretty stiff limit. It's not impossible to change, but it's there for a very good reason; changing it has consequences. Let's take the wayback machine to the early 1990s, when Linux was being created. Back in those days, we'd have arguments about whether Linux could be made to run in 2 MiB of RAM or if it really needed 4 whole MiB. Of course, the high-end snobs were all sneering at us, with their 16 MiB monster servers. What does that amusing little vignette have to do with anything? In that world, it's easy to make decisions about how to divide up the 4 GiB address space you get from simple 32-bit addressing. Some OSes just split it in half, treating the top bit of the address as the "kernel flag": addresses 0 to 231-1 had the top bit cleared, and were for user space code, and addresses 231 through 232-1 had the top bit set, and were for the kernel. You could just look at the address and tell: 0x80000000 and up, it's kernel-space, otherwise it's user-space. As PC memory sizes ballooned toward that 4 GiB memory limit, this simple 2/2 split started to become a problem. User space and kernel space both had good claims on lots of RAM, but since our purpose in having a computer is generally to run user programs, rather than to run kernels, OSes started playing around with the user/kernel divide. The 3/1 split is a common compromise. As to your question about physical vs virtual, it actually doesn't matter. Technically speaking, it's a virtual memory limit, but that's just because Linux is a VM-based OS. Installing 32 GiB of physical RAM won't change anything, nor will it help to swapon a 32 GiB swap partition. No matter what you do, a 32-bit Linux kernel will never be able to address more than 4 GiB simultaneously. (Yes, I know about PAE. Now that 64-bit OSes are finally taking over, I hope we can start forgetting that nasty hack. 
I don't believe it can help you in this case anyway.) The bottom line is that if you're running into the 1 GiB kernel VM limit, you can rebuild the kernel with a 2/2 split, but that directly impacts user space programs. 64-bit really is the right answer.
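The 2/2 split check described above really is just a top-bit test; here is a purely illustrative sketch in shell arithmetic (a 3/1 split would instead put the boundary at 0xc0000000):

```shell
# Under a 2/2 split, an address is kernel-space iff its top bit is set.
for addr in 0x00400000 0x7fffffff 0x80000000 0xc0000000; do
    if [ $(( addr & 0x80000000 )) -ne 0 ]; then
        printf '%s: kernel space\n' "$addr"
    else
        printf '%s: user space\n' "$addr"
    fi
done
```

That cheapness is exactly why the scheme was attractive: no table lookup, just one bit of the address.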
I have a perplexing problem. I have a library which uses sg for executing customized CDBs. There are a couple of systems which routinely have issues with memory allocation in sg. Usually, the sg driver has a hard limit of around 4mb, but we're seeing it on these few systems with ~2.3mb requests. That is, the CDBs are preparing to allocate for a 2.3mb transfer. There shouldn't be any issue here: 2.3 < 4.0. Now, the profile of the machine. It is a 64 bit CPU but runs CentOS 6.0 32-bit (I didn't build them nor do I have anything to do with this decision). The kernel version for this CentOS distro is 2.6.32. They have 16gb of RAM. Here is what the memory usage looks like on the system (though, because this error occurs during automated testing, I have not verified yet if this reflects the state when this errno is returned from sg). top - 00:54:46 up 5 days, 22:05, 1 user, load average: 0.00, 0.01, 0.21 Tasks: 297 total, 1 running, 296 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 15888480k total, 9460408k used, 6428072k free, 258280k buffers Swap: 4194296k total, 0k used, 4194296k free, 8497424k cachedI found this article from Linux Journal which is about allocating memory in the kernel. The article is dated but does seem to pertain to 2.6 (some comments about the author at the head). The article mentions that the kernel is limited to about 1gb of memory (though it's not entirely clear from the text if that 1gb each for physical and virtual or total). I'm wondering if this is an accurate statement for 2.6.32. Ultimately, I'm wondering if these systems are hitting this limit. Though this isn't really an answer to my problem, I'm wondering about the veracity of the claim for 2.6.32. So then, what is the actual limit of memory for the kernel? This may need to be a consideration for troubleshooting. Any other suggestions are welcome. 
What makes this so baffling is that these systems are identical to many others which do not show this same problem.
memory limit of the Linux kernel
Yes, use tr instead: tr 'a' 'b' < file.txt > output.txtsed deals in lines so a huge line will cause it problems. I expect it is declaring a variable internally to hold the line and your input exceeds the maximum size allocated to that variable. tr on the other hand deals with characters and should be able to handle arbitrarily long lines correctly.
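You can convince yourself that tr streams rather than buffering the whole line by testing it on a synthetic single-line file; the 10 MB size here is arbitrary, scale it up as far as you like:

```shell
# Build a file that is one long line of 'a's, then translate it with tr.
yes a | tr -d '\n' | head -c 10000000 > one-line.txt
tr 'a' 'b' < one-line.txt > output.txt
head -c 10 output.txt; echo    # → bbbbbbbbbb
```

Memory use stays flat no matter how long the line is, because tr only ever holds a small buffer of characters.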
I have a 250 MB text file, all in one line. In this file I want to replace a characters with b characters: sed -e "s/a/b/g" < one-line-250-mb.txtIt fails with: sed: couldn't re-allocate memoryIt seems to me that this kind of task could be performed inline without allocating much memory. Is there a better tool for the job, or a better way to use sed?GNU sed version 4.2.1 Ubuntu 12.04.2 LTS 1 GB RAM
Basic sed command on large one-line file: couldn't re-allocate memory
Your problem is shown in this line: [50547.483932] Normal free:1376kB min:3660kB low:4572kB high:5484kB active_anon:0kB inactive_anon:0kB active_file:227508kB inactive_file:96kB unevictable:0kB writepending:4104kB present:892920kB managed:855240kB mlocked:0kB slab_reclaimable:531548kB slab_unreclaimable:25576kB kernel_stack:1784kB pagetables:0kB bounce:0kB free_pcp:120kB local_pcp:120kB free_cma:0kBThe 2 important values are free and min. The kernel is the only thing allowed to make the system go below the min value. And when that does happen, userspace essentially freezes until it gets back above min. And if the OOM killer is enabled, it's free to start killing processes. You can use the sysctl param vm.min_free_kbytes to control this. See this article for a good explanation on the subject.
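For reference, you can inspect the current reserve and, as root, raise it; the 16384 below is only an illustrative value, not a recommendation for this machine:

```shell
# The reserve (in kB) that the per-zone min/low/high watermarks derive from:
cat /proc/sys/vm/min_free_kbytes

# Raise it at runtime (root only); add vm.min_free_kbytes to /etc/sysctl.conf
# to make the change survive a reboot:
# sysctl -w vm.min_free_kbytes=16384
```

After changing it, the min/low/high figures in the zone report above should shift accordingly on the next allocation failure, if one still occurs.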
I'm running Gentoo on my server, and I've just upgraded from kernel 4.4.39 to 4.9.6, with the kernel configuration essentially unchanged. My system log is filling up with error reports such as the following: [50547.483577] ksoftirqd/0: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK) [50547.483605] CPU: 0 PID: 3 Comm: ksoftirqd/0 Not tainted 4.9.6-gentoo-r1 #2 [50547.483608] Hardware name: /LakePort, BIOS 6.00 PG 02/20/2009 [50547.483613] f5473bd0 c13e692e c17a9870 00000000 f5473c00 c10d03a7 c17a79dc 02280020 [50547.483626] f5473c08 f5473c10 c17a9870 f5473be4 f5282d37 00000008 00000000 00000030 [50547.483638] f5473cbc c10d0769 02280020 c17a9870 00000000 f5473c34 00000000 e5dca054 [50547.483652] Call Trace: [50547.483670] [<c13e692e>] dump_stack+0x47/0x69 [50547.483679] [<c10d03a7>] warn_alloc+0xf7/0x120 [50547.483686] [<c10d0769>] __alloc_pages_nodemask+0x329/0xb40 [50547.483697] [<c1107114>] new_slab+0x2a4/0x460 [50547.483704] [<c1108e62>] ___slab_alloc.constprop.81+0x392/0x540 [50547.483713] [<c159fe11>] ? __build_skb+0x21/0x100 [50547.483721] [<c1109027>] __slab_alloc.constprop.80+0x17/0x30 [50547.483727] [<c11090c2>] kmem_cache_alloc+0x82/0xb0 [50547.483733] [<c159fe11>] ? __build_skb+0x21/0x100 [50547.483738] [<c159fe11>] __build_skb+0x21/0x100 [50547.483744] [<c159ffda>] __netdev_alloc_skb+0x9a/0xe0 [50547.483751] [<c1017774>] ? nommu_map_page+0x34/0x60 [50547.483771] [<f81f64be>] e1000_alloc_rx_buffers+0x18e/0x1f0 [e1000e] [50547.483788] [<f81f3d54>] e1000_clean_rx_irq+0x244/0x3f0 [e1000e] [50547.483804] [<f81fa176>] e1000e_poll+0x96/0x2d0 [e1000e] [50547.483810] [<c11098f1>] ? kmem_cache_free_bulk+0x1c1/0x280 [50547.483817] [<c15ad7ca>] net_rx_action+0x16a/0x270 [50547.483825] [<c1043df7>] __do_softirq+0xb7/0x1a0 [50547.483832] [<c169b108>] ? __schedule+0x138/0x510 [50547.483839] [<c1043ef8>] run_ksoftirqd+0x18/0x40 [50547.483846] [<c105c01c>] smpboot_thread_fn+0xfc/0x160 [50547.483851] [<c105bf20>] ? 
sort_range+0x30/0x30 [50547.483857] [<c1058ac3>] kthread+0xa3/0xc0 [50547.483863] [<c1058a20>] ? kthread_park+0x50/0x50 [50547.483868] [<c169ef43>] ret_from_fork+0x1b/0x28 [50547.483872] Mem-Info: [50547.483887] active_anon:20896 inactive_anon:4650 isolated_anon:0 active_file:120066 inactive_file:528731 isolated_file:115 unevictable:1558 dirty:2365 writeback:0 unstable:0 slab_reclaimable:135114 slab_unreclaimable:6440 mapped:16650 shmem:7338 pagetables:452 bounce:0 free:4552 free_pcp:30 free_cma:0 [50547.483899] Node 0 active_anon:83584kB inactive_anon:18600kB active_file:480264kB inactive_file:2114924kB unevictable:6232kB isolated(anon):0kB isolated(file):460kB mapped:66600kB dirty:9460kB writeback:0kB shmem:29352kB writeback_tmp:0kB unstable:0kB pages_scanned:29 all_unreclaimable? no [50547.483911] DMA free:3356kB min:68kB low:84kB high:100kB active_anon:0kB inactive_anon:0kB active_file:3360kB inactive_file:0kB unevictable:0kB writepending:16kB present:15988kB managed:15912kB mlocked:0kB slab_reclaimable:8908kB slab_unreclaimable:184kB kernel_stack:8kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB [50547.483913] lowmem_reserve[]: 0 834 3265 3265 [50547.483932] Normal free:1376kB min:3660kB low:4572kB high:5484kB active_anon:0kB inactive_anon:0kB active_file:227508kB inactive_file:96kB unevictable:0kB writepending:4104kB present:892920kB managed:855240kB mlocked:0kB slab_reclaimable:531548kB slab_unreclaimable:25576kB kernel_stack:1784kB pagetables:0kB bounce:0kB free_pcp:120kB local_pcp:120kB free_cma:0kB [50547.483933] lowmem_reserve[]: 0 0 19444 19444 [50547.483951] HighMem free:13476kB min:512kB low:3176kB high:5840kB active_anon:83584kB inactive_anon:18600kB active_file:249396kB inactive_file:2114740kB unevictable:6232kB writepending:5340kB present:2488904kB managed:2488904kB mlocked:6232kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:1808kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB [50547.483952] 
lowmem_reserve[]: 0 0 0 0 [50547.483960] DMA: 17*4kB (UM) 15*8kB (U) 32*16kB (UE) 23*32kB (UME) 14*64kB (UME) 8*128kB (UM) 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3356kB [50547.483989] Normal: 105*4kB (ME) 122*8kB (UM) 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1396kB [50547.484013] HighMem: 2049*4kB (UM) 111*8kB (UM) 25*16kB (UM) 12*32kB (M) 8*64kB (UM) 3*128kB (M) 3*256kB (UM) 4*512kB (M) 0*1024kB 0*2048kB 0*4096kB = 13580kB [50547.484030] 657546 total pagecache pages [50547.484030] 0 pages in swap cache [50547.484030] Swap cache stats: add 0, delete 0, find 0/0 [50547.484030] Free swap = 0kB [50547.484030] Total swap = 0kB [50547.484030] 849453 pages RAM [50547.484030] 622226 pages HighMem/MovableOnly [50547.484030] 9439 pages reservedIf I'm reading it correctly, the kernel is trying and failing to allocate a single 4KB page, despite there being 16 MB of totally free memory, and 2+ GB of disk cache that could easily be freed. Running cat /proc/buddyinfo shows that memory is badly fragmented, but fragmentation shouldn't be an issue when allocating a single page. It might be a symptom of whatever the underlying problem is, though. Any idea what's going on?
System unable to allocate memory even though memory is available
The reason the OOM-killer is not automatically called is that the system, albeit completely slowed down and unresponsive already when close to out-of-memory, has not actually reached the out-of-memory situation. Oversimplified, the almost-full RAM contains 3 types of data:

kernel data, which is essential
pages of essential process data (e.g. any data the process created in RAM only)
pages of non-essential process data (e.g. data such as the code of executables, for which there is a copy on disk / in the filesystem, and which, while currently mapped to memory, could be "reread" from disk upon usage)

In a memory-starved situation the Linux kernel (as far as I can tell it is the kswapd0 kernel thread), to prevent data loss and functionality loss, cannot throw away 1. and 2., but is at liberty to at least temporarily remove from RAM that mapped-into-memory file data which belongs to processes that are not currently running. While this behaviour involves disk-thrashing, constantly throwing away data and rereading it from disk, it can be seen as helpful, as it avoids, or at least postpones, the necessity of removing/killing a process and the freeing-but-also-losing of its memory. But it has a high price: performance.

[load pages from disk to ram with code of executable of process 1] [ run process 1 ] [evict pages with binary of process 1 from ram]
[load pages from disk to ram with code of executable of process 2] [ run process 2 ] [evict pages with binary of process 2 from ram]
[load pages from disk to ram with code of executable of process 3] [ run process 3 ] [evict pages with binary of process 3 from ram]
....
[load pages from disk to ram with code of executable of process 1] [ run process 1 ] [evict pages with binary of process 1 from ram]

is clearly IO expensive and the system is likely to become unresponsive, even though technically it has not yet completely run out of memory.
From a user perspective, however, the system seems to be hung/frozen, and the resulting unresponsive UI might not really be preferable over simply killing the offending process (e.g. a browser tab, whose memory usage might very well have been the root cause/culprit to begin with). This is where, as the question indicated, the Magic SysRq key trigger to start the OOM killer manually seems great, as the Magic SysRq is less impacted by the unresponsiveness of the system. While there might be use-cases where it is important to preserve the processes at all (performance) costs, for a desktop it is likely that users would prefer the OOM-killer over the frozen UI. There is a patch that claims to exempt clean mapped fs-backed files from eviction in such situations, in this answer on stackoverflow.
I have found that when running into an out-of-memory (OOM) situation, my Linux box's UI freezes completely for a very long time. I have set up the magic SysRq key using echo 1 | tee /proc/sys/kernel/sysrq and, encountering an OOM -> UI-unresponsive situation, was able to press Alt-Sysrq-f, which, as the dmesg log showed, causes the OOM killer to terminate/kill a process and thereby resolve the OOM situation. My question is now: why does Linux become so unresponsive that the GUI freezes, yet not seem to trigger the same OOM killer that I triggered manually via the Alt-Sysrq-f key combination? Considering that in the OOM "frozen" situation the system is so unresponsive as to not even allow a timely (< 10 sec) response to hitting Ctrl-Alt-F3 (switch to tty3), I would have to assume the kernel must be aware of its unresponsiveness, but still did not by itself invoke the Alt-Sysrq-f OOM killer. Why? These are some settings that might have an impact on the described behaviour.

$> mount | grep memory
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
$> cat /sys/fs/cgroup/memory/memory.oom_control
oom_kill_disable 0
under_oom 0
oom_kill 0

which, as I understand it, states that the memory cgroup has OOM killing neither activated nor disabled (evidently there must be a good reason to be able to have the OOM kill active or disabled, or maybe I cannot interpret the output correctly; also, the under_oom 0 is somewhat unclear, still).
Why does linux out-of-memory (OOM) killer not run automatically, but works upon sysrq-key?
What the system will do with the remaining 20%?The kernel will use the remaining physical memory for its own purposes (internal structures, tables, buffers, caches, whatever). The memory overcommitment setting handle userland application virtual memory reservations, the kernel doesn't use virtual memory but physical one.Why is this parameter required in first place?The overcommit_ratio parameter is an implementation choice designed to prevent applications to reserve more virtual memory than what will reasonably be available for them in the future, i.e. when they actually access the memory (or at least try to). Setting overcommit_ratio to 50% has been considered a reasonable default value by the Linux kernel developers. It assumes the kernel won't ever need to use more than 50% of the physical RAM. Your mileage may vary, the reason why it is a tunable.Why I should not always set it to 100%?Setting it to 100% (or any "too high" value) doesn't reliably disable overcommitment because you cannot assume the kernel will use 0% (or too little) of RAM. It won't prevent applications to crash as the kernel might preempt anyway all the physical memory it demands.
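As a concrete sketch (the RAM and swap figures below are made up for illustration), the allocation ceiling under vm.overcommit_memory=2 works out to swap + overcommit_ratio percent of physical RAM; this is what the kernel reports as CommitLimit in /proc/meminfo:

```shell
# CommitLimit = swap + (overcommit_ratio / 100) * physical RAM
ram_kb=16777216     # 16 GiB of physical RAM (example value)
swap_kb=4194304     # 4 GiB of swap (example value)
ratio=80            # vm.overcommit_ratio

limit_kb=$(( swap_kb + ram_kb * ratio / 100 ))
echo "CommitLimit: ${limit_kb} kB"    # prints CommitLimit: 17616076 kB
```

The 20% of RAM left out of the userland ceiling is what the kernel keeps for its own purposes, as described above.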
If I disable memory overcommit by setting vm.overcommit_memory to 2, by default the system will allow allocating memory up to the size of swap + 50% of physical memory, as explained here. I can change the ratio by modifying the vm.overcommit_ratio parameter. Let's say I set it to 80%, so 80% of physical memory may be used. My questions are:

what will the system do with the remaining 20%?
why is this parameter required in the first place?
why should I not always set it to 100%?
Where the remaining memory of vm.overcommit_ratio goes?
It depends on the settings you're running with, in particular memory overcommit (/proc/sys/vm/overcommit_memory; see man 5 proc for details). If memory overcommit is disabled, the editor's attempt to allocate memory (and possibly that of other programs attempting it at the same time) will fail. They'll get a failure result from the system call. It's up to each program to handle this, though an unfortunately common result is for the program to crash. The editor may also, for example, just refuse to open the file. If memory overcommit is enabled, then the system call requesting memory may well succeed. In that case, when the memory is actually accessed, the kernel will notice it's out of memory, and kill a process to reclaim memory. That process may or may not be the editor. The choice is governed by the oom_score (the result of several kernel heuristics) and oom_score_adj (configured) of each process on the system. Those are also in that proc(5) manpage.
On my Debian VM machine with 512 MB RAM and 348 MB swap, what will happen if I open a 1GB file in an editor and get out of memory? Will it crash the system? Or if not, how will Linux handle this? Wouldn't it be wise to install Swapspace so if needed there will be created enough swap automatically and dynamically? sudo apt-get install swapspace
Out of swap - what happens?
First of all, I'd like to thank MC68020 for taking the time to look into this for me. As it happens, their answer didn't include what was really happening in this situation - but they got the bounty anyway as it's a great answer and a helpful reference for the future. I'd also like to thank Philip Couling for his answer, which also wasn't quite right but pointed me in the right direction. The problem turned out to be systemd-oomd. The problem and its solution are described here: How do I disable the systemd OOM process killer in Ubuntu 22.04? In short: systemctl disable --now systemd-oomd systemctl mask systemd-oomdAnd now I can reliably run my process to completion every time, without some systemd service killing the entire process tree with no warning.
I am using Linux 5.15 with Ubuntu 22.04. I have a process that uses a lot of memory. It requires more memory than I have RAM in my machine. The first time that I ran it, it was killed by the OOM Killer. I understand this: the system ran out of memory, the OOM Killer was triggered, my process was killed. This makes sense. I am also certain that this is what happened: I took a look at dmesg and it's all there. So I added some swap space. I don't mind if this process takes a long time to run: I won't run it often. I ran the process again. This time it ran for longer than the first time. The whole system became very laggy, in that way that systems do when they are swapping a lot. It seemed to be working... and then it died. Not only did the process die, but the shell process that was its parent died too, and the Tmux process that was its parent, and the shell process that was the Tmux process' parent, and even the GNOME terminal process that was its parent! But then the process murder stopped: no more parents died. At first, I thought the OOM Killer had been triggered again - even though there was plenty of swap space still available - and that it had chosen to kill the GNOME terminal process. But I checked dmesg and journalctl -k and there was nothing new there. There was no sign that the OOM Killer had been triggered. So, first question: is there any circumstance in which the OOM Killer can be triggered without it logging anything to the kernel ring buffer? It puzzled me that the Linux kernel seemed to have started swapping but somehow it hadn't swapped enough... or it hadn't swapped fast enough... or something. So I increased vm.swappiness. This really shouldn't affect system stability: it's just a knob to turn for performance optimization. Even with vm.swappiness set to 0 the kernel should still start swapping when the free memory in a zone drops below a critical threshold. But it kind of seemed like it had started swapping but hadn't swapped enough... 
so I increased vm.swappiness to 100 to encourage it to swap a bit more. Then I ran the process again. The whole system became very laggy, in that way that systems do when they are swapping a lot... until the process ran successfully to completion. So, second question: why did the kernel not use the available swap space, even when free memory had dropped below the critical threshold and there was certainly plenty of swap space available? Why did changing vm.swappiness make a difference? Update: Further testing revealed that setting vm.swappiness is not a reliable solution. I've had some failures even with vm.swappiness set to 100. It might improve the chances of the process completing successfully but I'm not sure.
Why do processes on Linux crash if they use a lot of memory, yet still less than the amount of swap space available?
The OOM killer's activities are guaranteed to be in /var/log/dmesg (at least for a time). Usually the system logger daemon will also put them in /var/log/messages by default on most distributions with which I've worked. These commands might be of help in tracking the logs down:

grep oom /var/log/*
grep total_vm /var/log/*

This answer has more details about parsing those log entries to see exactly what is going on.
I use CentOS 7 with kernel 3.10.0. I know there is a hitman in Linux called the OOM killer which kills a process that uses too much memory when the system runs out of available space. I want to configure it to log its activities so that I can check whether it happens or not. How can I set it up? Thanks,
How to make OOM killer log into /var/log/messages when it kills any process?
The OOM killer is currently the only thing that kills processes automatically. dmesg and /var/log/messages should show OOM kills. If the process can handle that signal, it could log at least the kill. Normally memory hogs get killed. Perhaps more swap space can help you, if the memory is only getting allocated but is not really needed. Otherwise: get more RAM.
Some of my jobs are getting killed by the OS for some reason. I need to investigate why this is happening. The jobs that I run don't show any error messages in their own logs, which probably indicates the OS killed them. Nobody else has access to the server. I'm aware of the OOM killer; are there any other process killers? Where would I find logs for these things?
what process killers does linux have? [closed]
It seems the wget recursion causes a memory overflow. The natural first suggestion is to increase the memory of your cloud instance again, from 1 GB to 2 GB. This solved a similar issue recently. If this is not possible or doesn't solve the problem, the second solution is to run wget in 2 steps:

Retrieve the file list. As I see in your screenshot, the files are in the directory cloud.some_domain.com/remote.php/carddav/addressbooks/your_name/. So, run wget to get the directory index:

wget https://cloud.some_domain.com/remote.php/carddav/addressbooks/your_name/

This will give you an index.html file. Now you can parse it to retrieve the filenames to download:

grep ".vcf" index.html | awk -F"href=" '{print $2}' | awk -F\" '{print $2}' > ALL_VCF_FILES.lst
for elt in `cat ALL_VCF_FILES.lst`
do
    wget https://cloud.some_domain.com/remote.php/carddav/addressbooks/your_name/$elt
done
rm ALL_VCF_FILES.lst index.html
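As a sanity check before unleashing the download loop, the parsing pipeline can be tried on a small made-up index file (the contents below are invented for illustration; a real ownCloud index will have more markup, but the href extraction works the same way):

```shell
# Create a sample directory index, then extract only the .vcf hrefs
cat > sample-index.html <<'EOF'
<a href="alice.vcf">alice.vcf</a>
<a href="bob.vcf">bob.vcf</a>
<a href="notes.txt">notes.txt</a>
EOF

grep ".vcf" sample-index.html | awk -F"href=" '{print $2}' | awk -F\" '{print $2}'
# prints:
# alice.vcf
# bob.vcf
rm sample-index.html
```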
I wanted to backup all my .vcf files from my carddav server (ownCloud). The script is very simple and is the following: $ wget -Avcf -r -np -l0 --no-check-certificate -e robots=off --user=user \ --password='password' https://cloud.domain.com/foo/carddavThe total number of .vcf files is about 400, after downloading about 70 of them, wget returns this error:original URL: http://oi40.tinypic.com/2ch9itt.jpg Which kills the process because the system is "out of memory". The system is a Debian Wheezy virtual machine, hosted on Windows 7. I tried to raise the RAM to 1024MB instead of the actual 128MB, but the problem still exists. Any suggestions on how to work around this or alternative ways to accomplish this?
Wget out of memory error kills process
Is there some way I can tell Linux to use a given swap partition for hibernation only, and not to use it for swapping during normal operation?My first suggestion was to remove or comment the corresponding line from /etc/fstab. Example on my system:

$ grep swap /etc/fstab
/dev/mapper/NEO--L196--vg-swap_1 none swap sw 0 0

However, that suggestion is out, because pm-hibernate needs a swap partition "activated" to work. Instead, keep the swap activated (so leave it alone in /etc/fstab) but explicitly ask the kernel to ignore it. This is done by setting the sysctl parameter vm.swappiness to 0 (valid values are 0-100; higher will make the kernel swap more aggressively; the default is 60). To ensure this setting is persistent over reboots, edit /etc/sysctl.conf and add a line vm.swappiness=0.
For many years I set up my Linux machines with no swap, as they had enough memory to do what I needed and I would rather a process get killed if it used too much memory, instead of growing larger and larger and quietly slowing everything down. However I found out I required swap in order to use hibernate on a laptop, so I created a swap partition and hibernate has been working fine. Recently I found the machine was going into standby rather than hibernate, and upon investigation it turned out there was not enough space in the swap partition for hibernation to take place. This was because the swap partition I thought was reserved for hibernation, was in fact being used as normal swap space. Is there some way I can tell Linux to use a given swap partition for hibernation only, and not to use it for swapping during normal operation? EDIT: Per the question below, the machine has 8GB of memory and the swap partition is also 8GB, since I only wanted it for hibernation use and not actual swap use, so any larger than the machine's memory size would've been wasted. The underlying issue is that because the 8GB swap partition is being used as additional memory, the machine can now allocate up to 16GB of memory (8GB physical + 8GB swap). It recently had 10GB in use and of course could not hibernate as that 10GB could not fit in the 8GB swap partition.
Hibernate to a swap partition without using it as actual swap space
Yes, it can, and this is explicitly accounted for in the globbing library: /* Have we run out of memory? */ if (lose) { tmplink = 0; /* Here free the strings we have got. */ while (lastlink) { /* Since we build the list in reverse order, the first N entries will be allocated with malloc, if firstmalloc is set, from lastlink to firstmalloc. */ if (firstmalloc) { if (lastlink == firstmalloc) firstmalloc = 0; tmplink = lastlink; } else tmplink = 0; free (lastlink->name); lastlink = lastlink->next; FREE (tmplink); } /* Don't call QUIT; here; let higher layers deal with it. */ return ((char **)NULL); }Every memory allocation attempt is checked for failure, and sets lose to 1 if it fails. If the shell runs out of memory, it ends up exiting (see QUIT). There’s no special handling, e.g. overflowing to disk or handling the files that have already been found. The memory requirements in themselves are small: only directory names are preserved, in a globval structure which forms a linked list, storing only a pointer to the next entry and a pointer to the string.
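As a practical aside (not part of bash's globbing machinery), if the file list might not fit in memory, a streaming tool such as find avoids building the whole list at once; each match is printed as soon as it is found:

```shell
# Set up a tiny tree (example paths), then stream its entries.
mkdir -p demo/a demo/b
touch demo/a/1 demo/a/2 demo/b/3

# Unlike 'for f in demo/**/*', which expands the full list first,
# find emits one path at a time, so memory use stays constant.
find demo -mindepth 1 | sort    # prints 5 paths: demo/a, demo/a/1, ...
rm -r demo
```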
Can using bash's globstar (**) operator cause an out of memory error? Consider something like: for f in /**/*; do printf '%s\n' "$f"; doneWhen ** is being used to generate an enormous list of files, assuming the list is too large to fit in memory, will bash crash or does it have a mechanism to handle this? I know I've run ** on humongous numbers of files and haven't noticed a problem, so I am assuming that bash will use something like temporary files to store some of the list as it is being generated. Is that correct? Can bash's ** handle an arbitrary number of files or will it fail if the file list exceeds what can fit in memory? If it won't fail, what mechanism does it use for this? Something similar to the temp files generated by sort?
Can ** (bash's globstar) run out of memory?
To monitor/recover control of an "unstable"/starving server, I would advise using a hardware or, failing that, a software watchdog; in Debian you can install it with: sudo apt-get install watchdogThen you edit /etc/watchdog.conf and add thresholds or tests. From the top of my head, the watchdog also works such that if the kernel does not hear from the daemon for a good while, it reboots the machine, e.g. if a software routine does not talk to /dev/watchdog0 (or something similar) within a fixed time. For instance, you can define load thresholds in /etc/watchdog.conf: max-load-1 = 40 max-load-5 = 18 max-load-15 = 12Be aware also that some boards/chipsets come with built-in watchdogs; if I am not wrong, the ARM-based Allwinner A20 is one of them. From man watchdogThe Linux kernel can reset the system if serious problems are detected. This can be implemented via special watchdog hardware, or via a slightly less reliable software-only watchdog inside the kernel. Either way, there needs to be a daemon that tells the kernel the system is working fine. If the daemon stops doing that, the system is reset. watchdog is such a daemon. It opens /dev/watchdog, and keeps writing to it often enough to keep the kernel from resetting, at least once per minute. Each write delays the reboot time another minute. After a minute of inactivity the watchdog hardware will cause the reset. In the case of the software watchdog the ability to reboot will depend on the state of the machines and interrupts. The watchdog daemon can be stopped without causing a reboot if the device /dev/watchdog is closed correctly, unless your kernel is compiled with the CONFIG_WATCHDOG_NOWAYOUT option enabled.see also Raspberry Pi and Arduino: Building Reliable Systems With WatchDog Timers
I need to run some memory heavy tests in a remote computer through SSH. Last time I did this, the computer stopped responding, and it was necessary for someone to physically reboot it. Is there a way I can set it up so that the system restarts instead of freezing if too much memory is being used? (I do have root access). The kernel version is 4.9.0.
Restart system if it runs out of memory?
oom_score_adj is inherited on fork, so you can set its initial value for new children by setting the desired value on the parent process. Thus if you’re starting the target from a shell script, echo 1000 > /proc/$$/oom_score_adjwill change the shell’s value to 1000, and any process subsequently forked by the shell will start with oom_score_adj set to 1000.
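A minimal sketch of that inheritance on Linux (raising the value needs no special privileges; only lowering it below the current setting does):

```shell
# Raise the current shell's oom_score_adj, then show that a child
# process starts with the same value, inherited across fork/exec.
echo 500 > /proc/$$/oom_score_adj
sh -c 'cat /proc/self/oom_score_adj'    # prints 500
```

Because the child reads its own /proc/self entry, this confirms the value was inherited rather than merely set on the parent.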
Setting it directly with echo 1000 >/proc/<pid>/oom_score_adj is unreliable because the target program is already running; in that case the target program may already have caused an OOM before the echo 1000 >/proc/<pid>/oom_score_adj could run.
How to set "oom_score_adj" when (before) running the target program?
This was caused by a kernel bug present in Linux kernels 4.7.0 to 4.7.4 (it's fixed by this commit in 4.7.5 and this commit in 4.8.0).
I have an ARM based server with just under 2GB of addressable memory and 4GB of swap activated: root@bang:~> free -m total used free shared buff/cache available Mem: 1976 388 48 15 1539 1487 Swap: 4095 1 4094Once the system has been up for a day or so, the OOM killer starts getting a bit aggressive and starts killing things: Aug 3 12:59:01 bang kernel: [51585.822794] dump1090 invoked oom-killer: gfp_mask=0x24040c0(GFP_KERNEL|__GFP_COMP), order=2, oom_score_adj=0 Aug 3 12:59:01 bang kernel: [51585.822851] dump1090 cpuset=/ mems_allowed=0 Aug 3 12:59:01 bang kernel: [51585.822963] CPU: 6 PID: 25989 Comm: dump1090 Tainted: G C 4.7.0-41238-g206dbde-dirty #16 Aug 3 12:59:01 bang kernel: [51585.823010] Hardware name: SAMSUNG EXYNOS (Flattened Device Tree) Aug 3 12:59:01 bang kernel: [51585.823120] [<c010e4ec>] (unwind_backtrace) from [<c010b234>] (show_stack+0x10/0x14) Aug 3 12:59:01 bang kernel: [51585.823203] [<c010b234>] (show_stack) from [<c04eff84>] (dump_stack+0x88/0x9c) Aug 3 12:59:01 bang kernel: [51585.823283] [<c04eff84>] (dump_stack) from [<c0227830>] (dump_header+0x5c/0x1b0) Aug 3 12:59:01 bang kernel: [51585.823357] [<c0227830>] (dump_header) from [<c01d1aec>] (oom_kill_process+0x328/0x494) Aug 3 12:59:01 bang kernel: [51585.823420] [<c01d1aec>] (oom_kill_process) from [<c01d1fa0>] (out_of_memory+0x2e0/0x338) Aug 3 12:59:01 bang kernel: [51585.823487] [<c01d1fa0>] (out_of_memory) from [<c01d6724>] (__alloc_pages_nodemask+0xd80/0xda0) Aug 3 12:59:01 bang kernel: [51585.823555] [<c01d6724>] (__alloc_pages_nodemask) from [<c01d6a28>] (alloc_kmem_pages+0x18/0xb0) Aug 3 12:59:01 bang kernel: [51585.823620] [<c01d6a28>] (alloc_kmem_pages) from [<c01ee7a4>] (kmalloc_order+0x10/0x20) Aug 3 12:59:01 bang kernel: [51585.823688] [<c01ee7a4>] (kmalloc_order) from [<c06435b4>] (proc_submiturb+0x60c/0xe88) Aug 3 12:59:01 bang kernel: [51585.823749] [<c06435b4>] (proc_submiturb) from [<c06446e4>] (usbdev_do_ioctl+0x8b4/0x1bfc) Aug 3 12:59:01 bang kernel: [51585.823816] 
[<c06446e4>] (usbdev_do_ioctl) from [<c023c74c>] (do_vfs_ioctl+0x98/0x8e4) Aug 3 12:59:01 bang kernel: [51585.823879] [<c023c74c>] (do_vfs_ioctl) from [<c023d004>] (SyS_ioctl+0x6c/0x7c) Aug 3 12:59:01 bang kernel: [51585.823948] [<c023d004>] (SyS_ioctl) from [<c0107740>] (ret_fast_syscall+0x0/0x3c) Aug 3 12:59:01 bang kernel: [51585.823987] Mem-Info: Aug 3 12:59:01 bang kernel: [51585.824073] active_anon:43846 inactive_anon:46454 isolated_anon:0 Aug 3 12:59:01 bang kernel: [51585.824073] active_file:132799 inactive_file:109909 isolated_file:19 Aug 3 12:59:01 bang kernel: [51585.824073] unevictable:1408 dirty:56 writeback:0 unstable:0 Aug 3 12:59:01 bang kernel: [51585.824073] slab_reclaimable:17104 slab_unreclaimable:6387 Aug 3 12:59:01 bang kernel: [51585.824073] mapped:13368 shmem:3582 pagetables:971 bounce:0 Aug 3 12:59:01 bang kernel: [51585.824073] free:92967 free_pcp:31 free_cma:32601 Aug 3 12:59:01 bang kernel: [51585.824216] Normal free:13240kB min:3420kB low:4272kB high:5124kB active_anon:26652kB inactive_anon:26692kB active_file:360240kB inactive_file:194904kB unevictable:1336kB isolated(anon):0kB isolated(file):76kB present:770048kB managed:736192kB mlocked:1336kB dirty:16kB writeback:0kB mapped:11600kB shmem:900kB slab_reclaimable:68416kB slab_unreclaimable:25548kB kernel_stack:3384kB pagetables:3884kB unstable:0kB bounce:0kB free_pcp:124kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
no Aug 3 12:59:01 bang kernel: [51585.824259] lowmem_reserve[]: 0 9040 9040 Aug 3 12:59:01 bang kernel: [51585.824442] HighMem free:358664kB min:512kB low:1864kB high:3216kB active_anon:148732kB inactive_anon:159124kB active_file:170956kB inactive_file:244732kB unevictable:4296kB isolated(anon):0kB isolated(file):0kB present:1288192kB managed:1288192kB mlocked:4296kB dirty:208kB writeback:0kB mapped:41872kB shmem:13428kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:130404kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Aug 3 12:59:01 bang kernel: [51585.824483] lowmem_reserve[]: 0 0 0 Aug 3 12:59:01 bang kernel: [51585.824592] Normal: 1300*4kB (UMEH) 525*8kB (UMEH) 11*16kB (H) 9*32kB (H) 8*64kB (H) 5*128kB (H) 3*256kB (H) 1*512kB (H) 1*1024kB (H) 0*2048kB 0*4096kB = 13320kB Aug 3 12:59:01 bang kernel: [51585.825061] HighMem: 1212*4kB (UMC) 538*8kB (UM) 160*16kB (UM) 140*32kB (UMC) 108*64kB (UMC) 34*128kB (UM) 19*256kB (UMC) 10*512kB (UM) 8*1024kB (UMC) 7*2048kB (UMC) 73*4096kB (UMC) = 358976kB Aug 3 12:59:01 bang kernel: [51585.825558] 247387 total pagecache pages Aug 3 12:59:01 bang kernel: [51585.825596] 18 pages in swap cache Aug 3 12:59:01 bang kernel: [51585.825636] Swap cache stats: add 1360, delete 1342, find 33/71 Aug 3 12:59:01 bang kernel: [51585.825672] Free swap = 4190368kB Aug 3 12:59:01 bang kernel: [51585.825705] Total swap = 4194300kB Aug 3 12:59:01 bang kernel: [51585.825739] 514560 pages RAM Aug 3 12:59:01 bang kernel: [51585.825772] 322048 pages HighMem/MovableOnly Aug 3 12:59:01 bang kernel: [51585.825804] 8464 pages reserved Aug 3 12:59:01 bang kernel: [51585.825836] 32768 pages cma reserved Aug 3 12:59:01 bang kernel: [51585.825869] [ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name Aug 3 12:59:01 bang kernel: [51585.825958] [ 2363] 0 2363 2724 664 8 0 13 -1000 systemd-udevd Aug 3 12:59:01 bang kernel: [51585.826019] [ 
4035] 0 4035 1736 445 7 0 16 0 syslog-ng
Aug 3 12:59:01 bang kernel: [51585.826073] [ 4036] 0 4036 11306 1067 15 0 38 0 syslog-ng
Aug 3 12:59:01 bang kernel: [51585.826123] [ 4037] 0 4037 1149 639 7 0 0 0 log_to_sql.sh
Aug 3 12:59:01 bang kernel: [51585.826173] [ 4235] 60 4235 57365 13082 62 0 881 0 mysqld
Aug 3 12:59:01 bang kernel: [51585.826222] [ 4283] 107 4283 2557 1006 9 0 0 0 ulogd
Aug 3 12:59:01 bang kernel: [51585.826268] [ 4698] 0 4698 899 404 5 0 0 0 pppd
Aug 3 12:59:01 bang kernel: [51585.826316] [ 4762] 105 4762 1183 472 6 0 0 0 dnsmasq
Aug 3 12:59:01 bang kernel: [51585.826363] [ 4970] 0 4970 1292 542 7 0 0 -1000 sshd
Aug 3 12:59:01 bang kernel: [51585.826410] [ 5079] 0 5079 32467 4668 25 0 0 0 apache2
Aug 3 12:59:01 bang kernel: [51585.826457] [ 5081] 81 5081 168576 28259 140 0 0 0 apache2
Aug 3 12:59:01 bang kernel: [51585.826504] [ 5082] 81 5082 173465 34888 154 0 0 0 apache2
Aug 3 12:59:01 bang kernel: [51585.826550] [ 5211] 0 5211 594 29 5 0 0 0 atd
Aug 3 12:59:01 bang kernel: [51585.826597] [ 5239] 102 5239 777 430 5 0 0 0 dbus-daemon
Aug 3 12:59:01 bang kernel: [51585.826644] [ 5299] 103 5299 2665 2156 11 0 0 0 dhcpd
Aug 3 12:59:01 bang kernel: [51585.826691] [ 5365] 240 5365 601 209 5 0 0 0 distccd
Aug 3 12:59:01 bang kernel: [51585.826738] [ 5366] 240 5366 601 25 4 0 0 0 distccd
Aug 3 12:59:01 bang kernel: [51585.826784] [ 5399] 123 5399 1874 1411 10 0 0 0 ntpd
Aug 3 12:59:01 bang kernel: [51585.826830] [ 5428] 240 5428 601 25 4 0 0 0 distccd
Aug 3 12:59:01 bang kernel: [51585.826876] [ 5433] 0 5433 929 617 7 0 0 0 dovecot
Aug 3 12:59:01 bang kernel: [51585.826922] [ 5443] 97 5443 700 512 6 0 0 0 anvil
Aug 3 12:59:01 bang kernel: [51585.826968] [ 5444] 0 5444 733 561 5 0 0 0 log
Aug 3 12:59:01 bang kernel: [51585.827015] [ 5470] 8 5470 10720 1045 14 0 0 0 exim
Aug 3 12:59:01 bang kernel: [51585.827061] [ 5477] 240 5477 601 25 4 0 0 0 distccd
Aug 3 12:59:01 bang kernel: [51585.827107] [ 5497] 240 5497 601 25 4 0 0 0 distccd
Aug 3 12:59:01 bang kernel: [51585.827153] [ 5500] 0 5500 20882 2674 21 0 0 0 fail2ban-server
Aug 3 12:59:01 bang kernel: [51585.827199] [ 5502] 0 5502 1677 1007 7 0 0 0 screen
Aug 3 12:59:01 bang kernel: [51585.827246] [ 5503] 240 5503 601 25 4 0 0 0 distccd
Aug 3 12:59:01 bang kernel: [51585.827291] [ 5504] 0 5504 1295 804 8 0 0 0 bash
Aug 3 12:59:01 bang kernel: [51585.827339] [ 5505] 0 5505 1347 704 6 0 0 0 top
Aug 3 12:59:01 bang kernel: [51585.827385] [ 5506] 0 5506 842 102 5 0 0 0 tail
Aug 3 12:59:01 bang kernel: [51585.827431] [ 5507] 0 5507 842 100 6 0 0 0 tail
Aug 3 12:59:01 bang kernel: [51585.827477] [ 5510] 0 5510 1150 584 7 0 0 0 multitail.sh
Aug 3 12:59:01 bang kernel: [51585.827524] [ 5519] 0 5519 2466 1794 9 0 0 0 multitail
Aug 3 12:59:01 bang kernel: [51585.827572] [ 5526] 0 5526 941 651 6 0 0 0 gam_server
Aug 3 12:59:01 bang kernel: [51585.827618] [ 5527] 0 5527 842 108 6 0 0 0 tail
Aug 3 12:59:01 bang kernel: [51585.827664] [ 5528] 0 5528 842 105 5 0 0 0 tail
Aug 3 12:59:01 bang kernel: [51585.827710] [ 5529] 0 5529 842 100 5 0 0 0 tail
Aug 3 12:59:01 bang kernel: [51585.827756] [ 5530] 0 5530 842 355 6 0 0 0 tail
Aug 3 12:59:01 bang kernel: [51585.827802] [ 5531] 0 5531 843 386 6 0 0 0 tail
Aug 3 12:59:01 bang kernel: [51585.827848] [ 5532] 240 5532 601 25 4 0 0 0 distccd
Aug 3 12:59:01 bang kernel: [51585.827894] [ 5550] 240 5550 601 25 4 0 0 0 distccd
Aug 3 12:59:01 bang kernel: [51585.827940] [ 5622] 0 5622 615 442 5 0 0 0 rpcbind
Aug 3 12:59:01 bang kernel: [51585.827986] [ 5634] 240 5634 601 25 4 0 0 0 distccd
Aug 3 12:59:01 bang kernel: [51585.828032] [ 5652] 0 5652 787 572 5 0 0 0 rpc.statd
Aug 3 12:59:01 bang kernel: [51585.828078] [ 5707] 0 5707 789 46 5 0 0 0 rpc.idmapd
Aug 3 12:59:01 bang kernel: [51585.828124] [ 5733] 240 5733 601 25 4 0 0 0 distccd
Aug 3 12:59:01 bang kernel: [51585.828170] [ 5747] 0 5747 856 497 5 0 0 0 rpc.mountd
Aug 3 12:59:01 bang kernel: [51585.828220] [ 5804] 101 5804 562 367 6 0 0 0 radvd
Aug 3 12:59:01 bang kernel: [51585.828266] [ 5805] 0 5805 562 239 6 0 0 0 radvd
Aug 3 12:59:01 bang kernel: [51585.828313] [ 5839] 240 5839 601 25 4 0 0 0 distccd
Aug 3 12:59:01 bang kernel: [51585.828359] [ 5860] 0 5860 1150 618 5 0 0 0 heating.sh
Aug 3 12:59:01 bang kernel: [51585.828405] [ 5898] 0 5898 1007 451 6 0 0 0 agetty
Aug 3 12:59:01 bang kernel: [51585.828451] [ 5899] 0 5899 1007 436 7 0 0 0 agetty
Aug 3 12:59:01 bang kernel: [51585.828497] [ 5900] 0 5900 1007 419 6 0 0 0 agetty
Aug 3 12:59:01 bang kernel: [51585.828543] [ 5901] 0 5901 1007 435 5 0 0 0 agetty
Aug 3 12:59:01 bang kernel: [51585.828589] [ 5902] 0 5902 1007 436 6 0 0 0 agetty
Aug 3 12:59:01 bang kernel: [51585.828779] [ 5903] 0 5903 1007 449 7 0 0 0 agetty
Aug 3 12:59:01 bang kernel: [51585.828827] [ 5904] 0 5904 609 420 5 0 0 0 agetty
Aug 3 12:59:01 bang kernel: [51585.828875] [ 6004] 0 6004 1455 921 7 0 0 0 bluetoothd
Aug 3 12:59:01 bang kernel: [51585.828926] [ 6010] 0 6010 39540 7714 43 0 0 0 python2
Aug 3 12:59:01 bang kernel: [51585.828974] [ 3224] 0 3224 2247 1027 10 0 0 0 sshd
Aug 3 12:59:01 bang kernel: [51585.829021] [ 3227] 1000 3227 2247 945 9 0 0 0 sshd
Aug 3 12:59:01 bang kernel: [51585.829066] [ 3228] 1000 3228 1298 774 8 0 0 0 bash
Aug 3 12:59:01 bang kernel: [51585.829111] [ 3236] 1000 3236 1347 645 6 0 0 0 su
Aug 3 12:59:01 bang kernel: [51585.829155] [ 3238] 0 3238 1298 799 7 0 0 0 bash
Aug 3 12:59:01 bang kernel: [51585.829202] [ 880] 0 880 1082 759 7 0 0 0 config
Aug 3 12:59:01 bang kernel: [51585.829247] [ 1099] 106 1099 1327 1093 7 0 0 0 imap-login
Aug 3 12:59:01 bang kernel: [51585.829334] [ 1111] 8 1111 1046 872 6 0 0 0 imap
Aug 3 12:59:01 bang kernel: [51585.829449] [10717] 0 10717 1299 765 7 0 0 0 bash
Aug 3 12:59:01 bang kernel: [51585.829564] [10784] 0 10784 2885 1232 9 0 0 0 mysql
Aug 3 12:59:01 bang kernel: [51585.829701] [16321] 40 16321 32298 9969 39 0 0 0 named
Aug 3 12:59:01 bang kernel: [51585.829900] [24379] 0 24379 996 411 6 0 0 0 cron
Aug 3 12:59:01 bang kernel: [51585.830042] [25814] 0 25814 2270 1056 10 0 0 0 sshd
Aug 3 12:59:01 bang kernel: [51585.830162] [25818] 1000 25818 2304 943 8 0 0 0 sshd
Aug 3 12:59:01 bang kernel: [51585.830290] [25819] 1000 25819 1298 769 6 0 0 0 bash
Aug 3 12:59:01 bang kernel: [51585.830405] [25827] 1000 25827 1347 642 7 0 0 0 su
Aug 3 12:59:01 bang kernel: [51585.830505] [25828] 0 25828 1298 760 8 0 0 0 bash
Aug 3 12:59:01 bang kernel: [51585.830620] [25834] 0 25834 1242 565 7 0 0 0 screen
Aug 3 12:59:01 bang kernel: [51585.830753] [12903] 0 12903 1299 788 7 0 0 0 bash
Aug 3 12:59:01 bang kernel: [51585.830872] [25975] 0 25975 6895 579 11 0 0 0 dump1090
Aug 3 12:59:01 bang kernel: [51585.831006] Out of memory: Kill process 5082 (apache2) score 22 or sacrifice child
Aug 3 12:59:01 bang kernel: [51585.832683] Killed process 5082 (apache2) total-vm:693860kB, anon-rss:118856kB, file-rss:13300kB, shmem-rss:7396kB

The thing is, swap is hardly being used at all. Why hasn't anything been swapped out rather than invoking the OOM killer?
Here are the VM details:

root@bang:~> grep '' /proc/sys/vm/*
/proc/sys/vm/admin_reserve_kbytes:8192
/proc/sys/vm/block_dump:0
grep: /proc/sys/vm/compact_memory: Permission denied
/proc/sys/vm/compact_unevictable_allowed:1
/proc/sys/vm/dirty_background_bytes:0
/proc/sys/vm/dirty_background_ratio:10
/proc/sys/vm/dirty_bytes:0
/proc/sys/vm/dirty_expire_centisecs:3000
/proc/sys/vm/dirty_ratio:20
/proc/sys/vm/dirtytime_expire_seconds:43200
/proc/sys/vm/dirty_writeback_centisecs:500
/proc/sys/vm/drop_caches:0
/proc/sys/vm/extfrag_threshold:500
/proc/sys/vm/highmem_is_dirtyable:0
/proc/sys/vm/laptop_mode:0
/proc/sys/vm/legacy_va_layout:0
/proc/sys/vm/lowmem_reserve_ratio:32 32
/proc/sys/vm/max_map_count:65530
/proc/sys/vm/min_free_kbytes:3420
/proc/sys/vm/mmap_min_addr:4096
/proc/sys/vm/mmap_rnd_bits:8
/proc/sys/vm/nr_pdflush_threads:0
/proc/sys/vm/oom_dump_tasks:1
/proc/sys/vm/oom_kill_allocating_task:0
/proc/sys/vm/overcommit_kbytes:0
/proc/sys/vm/overcommit_memory:0
/proc/sys/vm/overcommit_ratio:50
/proc/sys/vm/page-cluster:3
/proc/sys/vm/panic_on_oom:0
/proc/sys/vm/percpu_pagelist_fraction:0
/proc/sys/vm/stat_interval:1
/proc/sys/vm/swappiness:50
/proc/sys/vm/user_reserve_kbytes:62869
/proc/sys/vm/vfs_cache_pressure:100
/proc/sys/vm/watermark_scale_factor:10

Kernel is mainline 4.7 with some Exynos patches:

Linux bang 4.7.0-41238-g206dbde-dirty #16 SMP PREEMPT Tue Aug 2 22:35:38 BST 2016 armv7l SAMSUNG EXYNOS (Flattened Device Tree) GNU/Linux

Now, since I built the kernel myself, it's entirely possible that I got an option wrong somewhere. Any help would be appreciated.

[EDIT1]: This seems to happen when there's high I/O usage, but I haven't determined whether that's down to the cache filling or something else.

[EDIT2]: It seems there's an ongoing (at this time) discussion on the kernel mailing lists about what appears to be an identical problem. I'll monitor it and report back on the outcome.
Why is the OOM killer killing processes when swap is hardly used?
Your first three commands are the culprit:

:a
N
$!ba

This reads the entire file into memory at once. The following script should only keep one segment in memory at a time:

% cat test.sed
#!/usr/bin/sed -nf

# Append this line to the hold space.
# To avoid an extra newline at the start, replace instead of append.
1h
1!H

# If we find a paren at the end...
/)$/{
    # Bring the hold space into the pattern space
    g
    # Remove the newlines
    s/\n//g
    # Print what we have
    p
    # Delete the hold space
    s/.*//
    h
}
% cat test.in
a
b
c()
d()
e
fghi
j()
% ./test.sed test.in
abc()
d()
efghij()

This awk solution will print each line as it comes, so it will only have a single line in memory at a time:

% awk '/)$/{print;nl=1;next}{printf "%s",$0;nl=0}END{if(!nl)print ""}' test.in
abc()
d()
efghij()
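To see the streaming awk variant end to end, here is a small self-contained demo; it recreates the sample input in a scratch directory (the file names are just illustrative) and runs the exact one-liner from above:

```shell
# Build the sample input in a temp dir, then run the streaming join:
# lines are accumulated until one ends in ")", at which point the
# joined segment is printed. Only one line is ever held in memory.
tmpd=$(mktemp -d)
printf 'a\nb\nc()\nd()\ne\nfghi\nj()\n' > "$tmpd/test.in"

awk '/)$/{print;nl=1;next}{printf "%s",$0;nl=0}END{if(!nl)print ""}' \
    "$tmpd/test.in" > "$tmpd/test.out"

cat "$tmpd/test.out"
# Output:
# abc()
# d()
# efghij()
```

The END block only adds a trailing newline when the input does not end in `)`, so files with a complete final segment are reproduced exactly.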
I am currently trying to remove all newlines that are not preceded by a closing parenthesis, so I came up with this expression:

sed -r -i -e ":a;N;$!ba;s/([^\)])\n/\1/g;d" reallyBigFile.log

It does the job on smaller files, but on the large file I am using (3 GB) it works for a while and then fails with an out-of-memory error:

sed: Couldn't re-allocate memory

Is there any way I could do this job without running into this issue? Using sed itself is not mandatory, I just want to get it done.
Out of memory while using sed with multiline expressions on giant file
You can ask the kernel to panic on OOM:

sysctl vm.panic_on_oom=1

or, for future reboots:

echo "vm.panic_on_oom=1" >> /etc/sysctl.conf

You can adjust a process's likeliness to be killed, but presumably you have already removed most processes, so this may not be of use. See man 5 proc for /proc/[pid]/oom_score_adj.

Of course, you can test the exit code of your program. If it is 137, it was killed by SIGKILL, which an OOM kill would do.

If using rsyslogd you can match for the oom message (I don't know what shape that has) in the data stream and run a program:

:msg, contains, "oom killer..." ^/bin/myprogram
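The exit-code approach can be wrapped around the long-running job itself. This is only a sketch: `run_and_watch` is a made-up name, and the `echo` stands in for whatever real notifier you use (mail, a webhook via curl, etc.):

```shell
# Hypothetical wrapper: run the given command; if it dies with
# SIGKILL (exit status 128 + 9 = 137) -- which is what an OOM kill
# looks like to the parent -- emit a notification.
run_and_watch() {
    "$@"
    status=$?
    if [ "$status" -eq 137 ]; then
        # Replace this echo with your real notification command.
        echo "ALERT: '$*' was killed with SIGKILL (possible OOM kill)"
    fi
    return "$status"
}

# Demonstration of the alert path (simulating a killed process):
run_and_watch sh -c 'exit 137' || true
```

Note that exit status 137 only tells you the process got SIGKILL; it cannot distinguish the OOM killer from an explicit `kill -9`, so cross-checking the kernel log is still worthwhile.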
I do a lot of work in the cloud running statistical models that take up a lot of memory, usually on Ubuntu 18.04. One big headache is that I set up a model to run for several hours or overnight, and when I check on it later I find that the process was killed. After doing some research, it seems this is due to something called the Out Of Memory (OOM) killer. I would like to know as soon as the OOM killer kills one of my processes, so I don't spend a whole night paying for a cloud VM that is not running anything. It looks like OOM events are logged in /var/log/, so I suppose I could write a cron job that periodically looks for new messages there, but this seems like a kludge. Is there any way to set up the OOM killer so that after it kills a process, it runs a shell script that I can configure to send me notifications?
Trigger a script when OOM Killer kills a process
There is no way to instruct the OOM killer to ignore a specific user's processes directly. You can, however, instruct it to ignore a specific process, and based on that you can construct a loop that checks all processes of the given user, updating it via cron or whatever way you like. The loop itself would look something like this:

while read r_pid ; do echo -16 | sudo tee /proc/$r_pid/oom_adj ; done < <(pgrep -U Yoki)

You can wrap it in a script and schedule it to run once per minute or at any interval you like.

Alternatively, you can largely avoid the OOM killer by disabling memory overcommit:

sysctl vm.overcommit_memory=2
echo "vm.overcommit_memory=2" >> /etc/sysctl.conf

though this is not a recommended approach at all, as it might lead to unexpected behaviour such as allocation failures or the system hanging.
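On current kernels the preferred knob is /proc/<pid>/oom_score_adj (range -1000 to 1000, where -1000 exempts the process entirely) rather than the legacy oom_adj used above. A sketch of the same per-user loop using it; the function name is made up and the user is a placeholder:

```shell
# Set oom_score_adj for every process owned by the given user.
# Writing -1000 exempts a process from the OOM killer; lowering the
# value requires root (unprivileged users may only raise it).
set_user_oom_adj() {
    user="$1"
    adj="$2"
    for pid in $(pgrep -U "$user"); do
        # Processes may exit between pgrep and the write, hence || true.
        echo "$adj" > "/proc/$pid/oom_score_adj" 2>/dev/null || true
    done
}

# Example (run as root, e.g. from a once-a-minute cron job):
# set_user_oom_adj Yoki -1000
```

Keep in mind that exempting a whole user's processes just pushes OOM kills onto everyone else's processes; it does not create memory.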
Is there any way to exclude some users' processes from the out-of-memory (OOM) killer on Unix? Alternatively, can I set an OOM priority per user?
Exclude user from OOM killer in unix
If the files are already sorted in an acceptable way, you could merge-sort them and then uniq them:

sort -t_ -k2,2n -k3,3n -m -- *.txt | uniq > Unique_Position.txt

... which sorts numerically on the second field (as delimited by underscores _) and, where those keys compare equal, by the third field. The -m option merges the already-sorted inputs without re-sorting them, so only a small amount of data is held in memory at a time. The resulting output is then piped through uniq before being redirected into the output file. Given the (short) sample input above, the results are:

chr1_1_200
chr1_200_400
chr1_600_800
chr1_1000_1200

If you're able to fully specify the sort fields for the lines that you want to keep, you could do it all within sort by adding the -u option:

sort -t_ -k1 -k2,2n -k3,3n -m -u *.txt > Unique_Position.txt

This would preserve unique lines among the three listed fields without needing to call out to uniq (notice the addition of the -u option). These sort fields need to match the way that the input files are sorted.
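As a self-contained demonstration with the sample data from the question (written to a scratch directory so nothing real is touched), the merge behaves like this:

```shell
# Recreate the three sample files, each already sorted on the numeric
# _-separated fields, then merge-sort (-m) and de-duplicate.
tmpd=$(mktemp -d)
printf 'chr1_1_200\nchr1_600_800\n'                   > "$tmpd/File1.txt"
printf 'chr1_600_800\nchr1_1000_1200\n'               > "$tmpd/File2.txt"
printf 'chr1_200_400\nchr1_600_800\nchr1_1000_1200\n' > "$tmpd/File3.txt"

sort -t_ -k2,2n -k3,3n -m -- "$tmpd"/*.txt | uniq > "$tmpd/Unique_Position.txt"

cat "$tmpd/Unique_Position.txt"
# Output:
# chr1_1_200
# chr1_200_400
# chr1_600_800
# chr1_1000_1200
```

Because the merge keeps equal keys adjacent, uniq sees duplicates next to each other and collapses them correctly.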
I have ~100000 files, each with unique rows such as:

File1.txt
chr1_1_200
chr1_600_800
...

File2.txt
chr1_600_800
chr1_1000_1200
...

File3.txt
chr1_200_400
chr1_600_800
chr1_1000_1200
...

Every file has around ~30 million rows, and when it's time to run the command:

cat *txt | sort -u > Unique_Position.txt

the system runs out of memory. How can I handle this with normal command lines in Linux?
Concatenate thousands of files already sorted and re-sort the output file quickly
I'm the author of the question above and even though full answer hasn't surfaced this far, here's the best known explanation this far:With modern Linux kernel, the Cached value of /proc/meminfo no longer describes the amount of disk cache. However, the kernel developers considered that changing this at this point is already too late.In practice, to actually measure the amount of disk cache in use, you should compute Cached - Shmem to estimate it. If you take the numbers from original question you get 15151936−14707664 (kiB) (from the output of /proc/meminfo) or 444272 (kiB), so it appears that the system actually had about 433 MiB of disk cache. In that case, it should be obvious that dropping all disk cache wouldn't free a lot of memory (even if all disk cache were dropped, the Cached field would have decreased only 3%.So the best guess is that some user mode software was using a lot of shared memory (typically tmpfs or shared memory maps) and that was causing the Cached to show high values despite the fact that system actually had very little disk cache which suggests it was close to getting into out-of-memory condition. I think Committed_AS being way more than MemTotal supports this theory.Here's a (shortened) copy of the conclusion from the above linked linux-mm thread in case the above link doesn't work in the future:Subject: Re: Why is Shmem included in Cached in /proc/meminfo? From: Vlastimil Babka @ 2021-08-30 16:05 UTC On 8/30/21 12:44 AM, Mikko Rantalainen wrote:It's not immediately obvious from fs/proc/meminfo.c function meminfo_proc_show() but the output of Cached: field seems to always include all of Shmem: field, tooHowever, if we change it now, we might create even larger confusion. People looking at the output for the first time (and IIRC also the 'free' command uses it) on a new kernel wouldn't be misled anymore. But people working with both old and new kernels will now have to take in account that it changed at some point... 
not good.

From: Khalid Aziz @ 2021-08-30 19:38 UTC

On Mon, 2021-08-30 at 20:26 +0300, Mikko Rantalainen wrote:

> Of course one possible solution is to keep "Cached" as is and introduce
> "Cache" with the real cache semantics (that is, it includes sum of
> (Cached - Shmem) and memory backed RAM). That way system administrators
> would at least see two different fields with unique values and look for
> the documentation.

I would recommend adding a new field. There is likely to be a good number of tools/scripts out there that already interpret the data from /proc/meminfo and possibly take actions based upon that data. Those tools will break if we change the sense of existing data. A new field has the downside of expanding the output further, but it also doesn't break existing tools.
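The Cached - Shmem estimate described above can be computed directly from /proc/meminfo. A minimal sketch (the function name is made up):

```shell
# Estimate the real disk (page) cache in kiB as Cached - Shmem,
# since on the affected kernels the Cached field includes all of Shmem.
disk_cache_kib() {
    awk '/^Cached:/ {c = $2} /^Shmem:/ {s = $2} END {print c - s}' /proc/meminfo
}

disk_cache_kib
```

With the numbers from the original question this gives 15151936 - 14707664 = 444272 kiB, i.e. about 433 MiB of actual disk cache despite Cached showing ~15 GiB.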
I have a problem with my Linux machine where the system now seems to run out of RAM easily (and trigger the OOM Killer) when it can normally handle a similar load just fine. Inspecting free -tm shows that buff/cache is eating lots of RAM. Normally this would be fine because I want to cache disk IO, but it now seems that the kernel cannot release this memory even when the system is running out of RAM. The system currently looks like this:

              total        used        free      shared  buff/cache   available
Mem:          31807       15550        1053       14361       15203        1707
Swap:           993         993           0
Total:        32801       16543        1053

but when I try to force the cache to be released I get this:

$ grep -E "^MemTotal|^Cached|^Committed_AS" /proc/meminfo
MemTotal:       32570668 kB
Cached:         15257208 kB
Committed_AS:   47130080 kB

$ time sync

real    0m0.770s
user    0m0.000s
sys     0m0.002s

$ time echo 3 | sudo tee /proc/sys/vm/drop_caches
3

real    0m3.587s
user    0m0.008s
sys     0m0.680s

$ grep -E "^MemTotal|^Cached|^Committed_AS" /proc/meminfo
MemTotal:       32570668 kB
Cached:         15086932 kB
Committed_AS:   47130052 kB

So writing all dirty pages to disk and dropping all caches was only able to release about 130 MB out of 15 GB of cache? As you can see, I'm running pretty heavy overcommit already, so I really cannot waste 15 GB of RAM on a non-working cache.
Kernel slabtop also claims to use less than 600 MB:

$ sudo slabtop -sc -o | head
 Active / Total Objects (% used)    : 1825203 / 2131873 (85.6%)
 Active / Total Slabs (% used)      : 57745 / 57745 (100.0%)
 Active / Total Caches (% used)     : 112 / 172 (65.1%)
 Active / Total Size (% used)       : 421975.55K / 575762.55K (73.3%)
 Minimum / Average / Maximum Object : 0.01K / 0.27K / 16.69K
  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
247219  94755   0%    0.57K   8836       28    141376K radix_tree_node
118864 118494   0%    0.69K   5168       23     82688K xfrm_state
133112 125733   0%    0.56K   4754       28     76064K ecryptfs_key_record_cache

$ cat /proc/version_signature
Ubuntu 5.4.0-80.90~18.04.1-lowlatency 5.4.124

$ cat /proc/meminfo
MemTotal:       32570668 kB
MemFree:         1009224 kB
MemAvailable:          0 kB
Buffers:           36816 kB
Cached:         15151936 kB
SwapCached:          760 kB
Active:         13647104 kB
Inactive:       15189688 kB
Active(anon):   13472248 kB
Inactive(anon): 14889144 kB
Active(file):     174856 kB
Inactive(file):   300544 kB
Unevictable:      117868 kB
Mlocked:           26420 kB
SwapTotal:       1017824 kB
SwapFree:            696 kB
Dirty:               200 kB
Writeback:             0 kB
AnonPages:      13765260 kB
Mapped:           879960 kB
Shmem:          14707664 kB
KReclaimable:     263184 kB
Slab:             601400 kB
SReclaimable:     263184 kB
SUnreclaim:       338216 kB
KernelStack:       34200 kB
PageTables:       198116 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    17303156 kB
Committed_AS:   47106156 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       67036 kB
VmallocChunk:          0 kB
Percpu:             1840 kB
HardwareCorrupted:     0 kB
AnonHugePages:    122880 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
FileHugePages:         0 kB
FilePmdMapped:         0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:     9838288 kB
DirectMap2M:    23394304 kB

Can you suggest any explanation for what could be causing Cached in /proc/meminfo to take about 50% of the system RAM without the ability to release it?
I know that PostgreSQL shared_buffers with huge pages enabled would show up as Cached, but I'm not running PostgreSQL on this machine. I see that Shmem in meminfo looks suspiciously big, but how do I figure out which processes are using that? I guess it could be some misbehaving program, but how do I query the system to figure out which process is holding that RAM? I currently have 452 processes / 2144 threads, so investigating all of those manually would be a huge task.

I also checked that the cause of this RAM usage is not (only?) System V shared memory:

$ ipcs -m | awk 'BEGIN{ sum=0 } { sum += $5 } END{print sum}'
1137593612

While the total reported by ipcs is big, it's still "only" 1.1 GB.

I also found a similar question, https://askubuntu.com/questions/762717/high-shmem-memory-usage, where high Shmem usage was caused by leftover files in a tmpfs-mounted directory. However, that doesn't seem to be the problem with my system either, which uses only 221 MB:

$ df -h -B1M | grep tmpfs
tmpfs     3181     3    3179   1% /run
tmpfs    15904   215   15689   2% /dev/shm
tmpfs        5     1       5   1% /run/lock
tmpfs    15904     0   15904   0% /sys/fs/cgroup
tmpfs     3181     1    3181   1% /run/user/1000
tmpfs     3181     1    3181   1% /run/user/1001

I found another answer explaining that files that used to live on a tmpfs filesystem and have already been deleted, but whose file handles are still open, don't show up in df output but still eat RAM. I found out that Google Chrome wastes about 1.6 GB on deleted files that it has forgotten(?) to close:

$ sudo lsof -n | grep "/dev/shm" | grep deleted | grep -o 'REG.*' | awk 'BEGIN{sum=0}{sum+=$3}END{print sum}'
1667847810

(Yeah, the above doesn't filter for chrome, but I also tested with filtering and that's pretty much just Google Chrome wasting my RAM via deleted files with open file handles.)

Update: It seems that the real culprit is Shmem: 14707664 kB. Of that, 1.6 GB is explained by deleted files in tmpfs, System V shared memory explains 1.1 GB, and existing files in tmpfs about 220 MB. So I'm still missing about 11.8 GB somewhere.
At least with Linux kernel 5.4.124 it appears that Cached includes all of Shmem which is the explanation why echo 3 > drop_caches cannot zero the field Cached even though it does free the cache. So the real question is why Shmem is taking over 10 GB of RAM when I wasn't expecting any? Update: I checked out top and found out that fields RSan ("RES Anonymous") and RSsh ("RES Shared") pointed to thunderbird and Eclipse. Closing Thunderbird didn't release any cached memory but closing Eclipse freed 3.9 GB of Cached. I'm running Eclipse with JVM flag -Xmx4000m so it seems that JVM memory usage may appear as Cached! I'd still prefer to find a method to map memory usage to processes instead of randomly closing processes and checking if it freed any memory. Update: File systems that use tmpfs behind the scenes could also cause Shmem to increase. I tested it like this: $ df --output=used,source,fstype -B1M | grep -v '/dev/sd' | grep -v ecryptfs | tail -n +2 | awk 'BEGIN{sum=0}{sum+=$1}END{print sum}' 4664So it seems that even if I only exclude filesystems backed by real block devices (my ecryptfs is mounted on those block devices, too) I can only explain about 4.7 GB of lost memory. And 4.3 GB of that is explained by snapd created squashfs mounts which to my knowledge do not use Shmem. Update: For some people, the explanation has been GEM objects reserved by GPU driver. There doesn't seem to be any standard interface to query these but for my Intel integrated grapchics, I get following results: $ sudo sh -c 'cat /sys/kernel/debug/dri/*/i915_gem_objects' | perl -npe 's#([0-9]+) bytes#sprintf("%.1f", $1/1024/1024)." 
MB"#e' 1166 shrinkable [0 free] objects, 776.8 MBXorg: 114144 objects, 815.9 MB (38268928 active, 166658048 inactive, 537980928 unbound, 0 closed) calibre-paralle: 1 objects, 0.0 MB (0 active, 0 inactive, 32768 unbound, 0 closed) Xorg: 595 objects, 1329.9 MB (0 active, 19566592 inactive, 1360146432 unbound, 0 closed) chrome: 174 objects, 63.2 MB (0 active, 0 inactive, 66322432 unbound, 0 closed) chrome: 174 objects, 63.2 MB (0 active, 0 inactive, 66322432 unbound, 0 closed) chrome: 20 objects, 1.2 MB (0 active, 0 inactive, 1241088 unbound, 0 closed) firefox: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) GLXVsyncThread: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 
inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 
0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 
active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) chrome: 1100 objects, 635.1 MB (0 active, 0 inactive, 180224 unbound, 0 closed) chrome: 1100 objects, 635.1 MB (0 active, 665772032 inactive, 180224 unbound, 0 closed) chrome: 20 objects, 1.2 MB (0 active, 0 inactive, 1241088 unbound, 0 closed) [k]contexts: 3 objects, 0.0 MB (0 active, 40960 inactive, 0 unbound, 0 closed)Those results do not sensible to me. If each of those lines were an actual memory allocation the total would be in hundreds of gigabytes! Even if I assume that the GPU driver just reports some lines multiple times, I get this: $ sudo sh -c 'cat /sys/kernel/debug/dri/*/i915_gem_objects' | sort | uniq | perl -npe 's#([0-9]+) bytes#sprintf("%.1f", $1/1024/1024)." 
MB"#e'1218 shrinkable [0 free] objects, 797.6 MB calibre-paralle: 1 objects, 0.0 MB (0 active, 0 inactive, 32768 unbound, 0 closed) chrome: 1134 objects, 645.0 MB (0 active, 0 inactive, 163840 unbound, 0 closed) chrome: 1134 objects, 645.0 MB (0 active, 676122624 inactive, 163840 unbound, 0 closed) chrome: 174 objects, 63.2 MB (0 active, 0 inactive, 66322432 unbound, 0 closed) chrome: 20 objects, 1.2 MB (0 active, 0 inactive, 1241088 unbound, 0 closed) firefox: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) GLXVsyncThread: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) [k]contexts: 2 objects, 0.0 MB (0 active, 24576 inactive, 0 unbound, 0 closed) Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed) Xorg: 114162 objects, 826.8 MB (0 active, 216350720 inactive, 537980928 unbound, 0 closed) Xorg: 594 objects, 1329.8 MB (14794752 active, 4739072 inactive, 1360146432 unbound, 0 closed)That's still way over the expected total numbers in range 4-8 GB. (The system has currently two seats logged in so I'm expecting to see two Xorg processes.) Update: Looking the GPU debug output a bit more, I now think that those unbound numbers mean virtual blocks without actual RAM used. If I do this I get more sensible numbers for GPU memory usage: $ sudo sh -c 'cat /sys/kernel/debug/dri/*/i915_gem_objects' | perl -npe 's#^(.*?): .*?([0-9]+) bytes.*?([0-9]+) unbound.*#sprintf("%s: %.1f", $1, ($2-$3)/1024/1024)." MB"#eg' | grep -v '0.0 MB' 1292 shrinkable [0 free] objects, 848957440 bytesXorg: 303.1 MB Xorg: 32.7 MB chrome: 667.5 MB chrome: 667.5 MBThat could explain about 1.5 GB of RAM which seems normal for the data I'm handling. I'm still missing multiple gigabytes to somewhere! Update: I'm currently thinking that the problem is actually caused by deleted files backed by RAM. These could be caused by broken software that leaks open file handle after deleting/discarding the file. 
When I run

$ sudo lsof -n | grep -Ev ' /home/| /tmp/| /lib/| /usr/' | grep deleted | grep -o " REG .*" | awk 'BEGIN{sum=0}{sum+=$3}END{print sum / 1024 / 1024 " MB"}'
4560.65 MB

(The manually collected list of path prefixes are actually backed by real block devices. Since my root is backed by a real block device, I cannot just list all the block mount points here. A more clever script could list all non-mount-point directories in root and also list all block mounts longer than just / here.)

This explains nearly 4.6 GB of lost RAM. Combined with the output from ipcs, GPU RAM (with the assumption about unbound memory) and tmpfs usage, I'm still currently missing about 4 GB of Shmem somewhere!
How to diagnose high `Shmem`? (was: Why `echo 3 > drop_caches` cannot zero the cache?)
It turned out to be a bug in gmake 3.81. When I ran the compile command directly without make, it was able to use much more memory. There was a known bug in 3.80: something like this. That bug was supposed to be fixed in 3.81, but I was getting a very similar error, so I tried gmake 3.82. The compile proceeded, and I haven't seen the VM error again.

I was never able to get it to dump core on that error, so I don't actually know what was running out of virtual memory: gmake, g++, or as. Nor do I know what the bug really was, but everything seems to be working now.
Our group is all programmers and we exclusively use Linux or MacOS, but a customer uses Solaris 10, and we need our code to work there. So we scrounged up an old SunFire V240, and a rented Solaris 10 VM, to test on. The code compiles just fine on the VM, but on the SunFire it fails. Our code has a giant autogenerated C++ file as part of the build. It's this huge file that fails to compile. It fails with the message: virtual memory exhausted: Not enough space

I can't figure it out. The SunFire has 8GB of RAM, and the virtual memory exhaustion happens when the compile reaches just over 1.2GB. Nothing else significant is running. Here are some memory stats near failure:

Using prstat -s size:
SIZE (virtual memory): 1245 MB
RSS (real memory): 1200 MB

According to echo "::memstat" | mdb -k, lots of memory is still free:
Free (cachelist) is 46%
Free (freelist) is 26% of total.

All user processes are using about 17% of RAM just before the compile fails. (After the failure, user RAM usage goes down to 2%.) This agrees with the other RAM usage numbers. (1.2GB / 8.0GB ~= 15%) swap -l reports that the swap is completely unused.

Some other details: We're building with g++ 6.1.0, compiled for 64 bit. It fails whether or not we pass the -m64 flag to the compiler.

# uname -a
SunOS servername 5.10 Generic_147440-27 sun4u sparc SUNW,Sun-Fire-V240

Both the VM and the SunFire have system limits set like this:

>ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 10
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 29995
virtual memory (kbytes, -v) unlimited

(using su)
>rctladm -l
...
process.max-address-space syslog=off [ lowerable deny no-signal bytes ] process.max-file-descriptor syslog=off [ lowerable deny count ] process.max-core-size syslog=off [ lowerable deny no-signal bytes ] process.max-stack-size syslog=off [ lowerable deny no-signal bytes ] process.max-data-size syslog=off [ lowerable deny no-signal bytes ] process.max-file-size syslog=off [ lowerable deny file-size bytes ] process.max-cpu-time syslog=off [ lowerable no-deny cpu-time inf seconds ] ...We've tried setting the stack size to "unlimited" but that doesn't make any identifiable difference. # df / (/dev/dsk/c1t0d0s0 ):86262876 blocks 7819495 files /devices (/devices ): 0 blocks 0 files /system/contract (ctfs ): 0 blocks 2147483608 files /proc (proc ): 0 blocks 29937 files /etc/mnttab (mnttab ): 0 blocks 0 files /etc/svc/volatile (swap ):14661104 blocks 1180179 files /system/object (objfs ): 0 blocks 2147483465 files /etc/dfs/sharetab (sharefs ): 0 blocks 2147483646 files /platform/sun4u-us3/lib/libc_psr.so.1(/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1):86262876 blocks 7819495 files /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1(/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1):86262876 blocks 7819495 files /dev/fd (fd ): 0 blocks 0 files /tmp (swap ):14661104 blocks 1180179 files /var/run (swap ):14661104 blocks 1180179 files /home (/dev/dsk/c1t1d0s0 ):110125666 blocks 8388083 filesEdit 1: swap output after setting up 16GB swap file: Note: block size is 512 # swap -l swapfile dev swaplo blocks free /dev/dsk/c1t0d0s1 32,25 16 2106416 2106416 /home/me/tmp/swapfile - 16 32964592 32964592# swap -s total: 172096k bytes allocated + 52576k reserved = 224672k used, 23875344k available
Solaris 10: Virtual Memory Exhausted
These are migration types, defined in mm/page_alloc.c in the kernel: static const char types[MIGRATE_TYPES] = { [MIGRATE_UNMOVABLE] = 'U', [MIGRATE_MOVABLE] = 'M', [MIGRATE_RECLAIMABLE] = 'E', [MIGRATE_HIGHATOMIC] = 'H', #ifdef CONFIG_CMA [MIGRATE_CMA] = 'C', #endif #ifdef CONFIG_MEMORY_ISOLATION [MIGRATE_ISOLATE] = 'I', #endif };The types themselves are defined in include/linux/mmzone.h. So E means reclaimable, and H means “high atomic”, i.e. “high-order atomic allocation”.
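As a small illustration - the letter-to-name mapping is taken straight from the kernel table above, while the helper function is mine - decoding one of those parenthesised flag groups:

```python
# Single-letter codes from the page-allocator report, per mm/page_alloc.c.
MIGRATE_CODES = {
    'U': 'unmovable',
    'M': 'movable',
    'E': 'reclaimable',
    'H': 'high-order atomic',
    'C': 'CMA',
    'I': 'isolate',
}

def decode(flags):
    """Expand e.g. 'UME' into the full migration-type names."""
    return [MIGRATE_CODES[c] for c in flags]

print(decode('UME'))  # → ['unmovable', 'movable', 'reclaimable']
```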
When the OOM killer or the kernel reports memory state, it uses the following abbreviations:

Node 0 DMA: 26*4kB (M) 53*8kB (UM) 33*16kB (ME) 23*32kB (UME) 6*64kB (ME) 7*128kB (UME) 1*256kB (M) 2*512kB (ME) 0*1024kB 0*2048kB 0*4096kB = 4352kB
Node 0 DMA32: 803*4kB (UME) 3701*8kB (UMEH) 830*16kB (UMH) 2*32kB (H) 0*64kB 0*128kB 1*256kB (H) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 46420kB

I understand some of them, for example M - movable, UMH - unmovable high. But I cannot find what E means. Where can I find documentation about it? In my case, I have the following message:

page allocation stalls for 27840ms, order:0, mode:0x14200ca(GFP_HIGHUSER_MOVABLE)

which means the process requests a 4kB page (2^0 * 4kB), which I would expect to be coded as (MH). Am I right? Or is HIGHUSER coded in a different way?
What do the abbreviations in OOM Killer memory statistics report mean?
Memory cgroup out of memory

You need to avoid filling the memory cgroup that you are running within.

Task in /slurm/uid_11122/job_58003653/step_0 killed as a result of limit of /slurm/uid_11122/job_58003653
memory: usage 8,388,608kB, limit 8,388,608kB, failcnt 3673
memory+swap: usage 8388608kB, limit 16777216kB, failcnt 0
kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
Memory cgroup stats for /slurm/uid_11122/job_58003653: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Memory cgroup stats for /slurm/uid_11122/job_58003653/step_extern: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Memory cgroup stats for /slurm/uid_11122/job_58003653/step_batch: cache:0KB rss:4452KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:4452KB inactive_file:0KB active_file:0KB unevictable:0KB
Memory cgroup stats for /slurm/uid_11122/job_58003653/step_0: cache:6,399,032KB rss:1,985,124KB rss_huge:1,476,608KB mapped_file:20,232KB swap:0KB inactive_anon:1,890,552KB active_anon:6,491,116KB inactive_file:1,216KB active_file:892KB unevictable:0KB

It looks like you have ~ 6.4GB in "shmem", which usually means a tmpfs. (Some other types of shmem are sysv IPC shared memory as shown by ipcs, or a memfd...). Combined with ~ 2GB RSS, that puts you over the 8.4GB limit for your cgroup.
"shmem" is not mentioned in the messages, but I infer it from the ~ 6.4GB which is shown in both "cache" and "active_anon".

cache - page cache, including tmpfs (shmem), in bytes
active_anon - anonymous and swap cache on active least-recently-used (LRU) list, including tmpfs (shmem), in bytes

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/resource_management_guide/sec-memory

When a cgroup goes over its limit, we first try to reclaim memory from the cgroup so as to make space for the new pages that the cgroup has touched. If the reclaim is unsuccessful, an OOM routine is invoked to select and kill the bulkiest task in the cgroup.

https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt

Control groups, usually referred to as cgroups, are a Linux kernel feature which allow processes to be organized into hierarchical groups whose usage of various types of resources can then be limited and monitored. The kernel's cgroup interface is provided through a pseudo-filesystem called cgroupfs. Grouping is implemented in the core cgroup kernel code, while resource tracking and limits are implemented in a set of per-resource-type subsystems (memory, CPU, and so on).

http://man7.org/linux/man-pages/man7/cgroups.7.html
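The shmem inference above can be written out numerically. This sketch uses the kB figures from the step_0 line of the dmesg dump; tmpfs/shmem pages are counted both in "cache" and on the anon LRU lists, so the overlap of the two counters is a rough estimate of shmem (my own back-of-the-envelope helper, not a kernel formula):

```python
# Figures (in kB) from the "Memory cgroup stats ... step_0" line.
stats = {
    'cache': 6399032,
    'rss': 1985124,
    'active_anon': 6491116,
    'inactive_anon': 1890552,
}

total_anon = stats['active_anon'] + stats['inactive_anon']
# tmpfs/shmem pages appear both as page cache and as anonymous LRU
# pages, so the overlap of the two counters estimates shmem usage.
shmem_estimate_kb = min(stats['cache'], total_anon)
print(f"~{shmem_estimate_kb / 1e6:.1f} GB likely shmem/tmpfs")  # → ~6.4 GB likely shmem/tmpfs
```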
I am running a complex workflow via bash scripts, which are using external programs/command to do different things. It runs fine for several hours, but then suddenly the OOM killer terminates programs of my workflow or the entire bash scripts, even though there is still plenty of memory available. I have logged the memory usage every 0.01 seconds with the ps command, there is no increase or change at all, and still several GB available. But suddenly from one memory snapshot to the next some process gets terminated by the OOM killer. Here is a typical ps snapshot of the memory usage: PID %MEM RSS VSZ COMMAND USER 139443 1.2 1651768 8622936 java jadzia 123601 0.1 163352 523068 obabel jadzia 139355 0.0 5488 253120 srun jadzia 125747 0.0 5252 365088 obabel jadzia 125757 0.0 5252 365088 obabel jadzia 125388 0.0 5224 365088 obabel jadzia 125824 0.0 3764 267736 obabel jadzia 21062 0.0 3724 128628 bash jadzia 125778 0.0 3628 267736 obabel jadzia 127018 0.0 1904 113416 bash jadzia 126127 0.0 1812 161476 ps jadzia 139526 0.0 1740 10288 one-step.sh jadzia 139508 0.0 1736 10252 one-step.sh jadzia 139473 0.0 1728 10256 one-step.sh jadzia 139477 0.0 1728 10252 one-step.sh jadzia 139558 0.0 1724 10252 one-step.sh jadzia 139585 0.0 1724 10252 one-step.sh jadzia 139539 0.0 1704 10292 one-step.sh jadzia 139370 0.0 1688 9676 one-step.sh jadzia 139485 0.0 1688 10200 one-step.sh jadzia 125742 0.0 1544 10252 one-step.sh jadzia 125752 0.0 1532 10252 one-step.sh jadzia 125772 0.0 1532 10256 one-step.sh jadzia 125819 0.0 1532 10252 one-step.sh jadzia 125363 0.0 1508 10292 one-step.sh jadzia 123586 0.0 1496 10200 one-step.sh jadzia 139357 0.0 860 48364 srun jadzia 104975 0.0 724 6448 ng jadzia 91240 0.0 720 6448 ng jadziaThe RSS sum over all processes always stay below 3GB and never spikes. 
When looking at the dmesg output, the entries show that it are different programs which invoke the oom-killer: from external binary programs such as obabel to the "tr" utility or the bash script which executes the commands itself. Here are two different dmesg examples which show the oom events: [Thu Nov 1 15:15:27 2018] tr invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=0 [Thu Nov 1 15:15:27 2018] tr cpuset=step_0 mems_allowed=0-1 [Thu Nov 1 15:15:27 2018] CPU: 27 PID: 33591 Comm: tr Tainted: G OE ------------ 3.10.0-693.21.1.el7.x86_64 #1 [Thu Nov 1 15:15:27 2018] Hardware name: Dell Inc. PowerEdge M630/0R10KJ, BIOS 2.5.4 08/17/2017 [Thu Nov 1 15:15:27 2018] Call Trace: [Thu Nov 1 15:15:27 2018] [<ffffffff816ae7c8>] dump_stack+0x19/0x1b [Thu Nov 1 15:15:27 2018] [<ffffffff816a9b90>] dump_header+0x90/0x229 [Thu Nov 1 15:15:27 2018] [<ffffffff810c7c82>] ? default_wake_function+0x12/0x20 [Thu Nov 1 15:15:27 2018] [<ffffffff8118a3d6>] ? find_lock_task_mm+0x56/0xc0 [Thu Nov 1 15:15:27 2018] [<ffffffff811f5fb8>] ? try_get_mem_cgroup_from_mm+0x28/0x60 [Thu Nov 1 15:15:27 2018] [<ffffffff8118a884>] oom_kill_process+0x254/0x3d0 [Thu Nov 1 15:15:27 2018] [<ffffffff811f9cd6>] mem_cgroup_oom_synchronize+0x546/0x570 [Thu Nov 1 15:15:27 2018] [<ffffffff811f9150>] ? 
mem_cgroup_charge_common+0xc0/0xc0 [Thu Nov 1 15:15:27 2018] [<ffffffff8118b114>] pagefault_out_of_memory+0x14/0x90 [Thu Nov 1 15:15:27 2018] [<ffffffff816a7f2e>] mm_fault_error+0x68/0x12b [Thu Nov 1 15:15:27 2018] [<ffffffff816bb741>] __do_page_fault+0x391/0x450 [Thu Nov 1 15:15:27 2018] [<ffffffff816bb835>] do_page_fault+0x35/0x90 [Thu Nov 1 15:15:27 2018] [<ffffffff816b7768>] page_fault+0x28/0x30 [Thu Nov 1 15:15:27 2018] Task in /slurm/uid_11122/job_58003653/step_0 killed as a result of limit of /slurm/uid_11122/job_58003653 [Thu Nov 1 15:15:27 2018] memory: usage 8388608kB, limit 8388608kB, failcnt 3673 [Thu Nov 1 15:15:27 2018] memory+swap: usage 8388608kB, limit 16777216kB, failcnt 0 [Thu Nov 1 15:15:27 2018] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0 [Thu Nov 1 15:15:27 2018] Memory cgroup stats for /slurm/uid_11122/job_58003653: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB [Thu Nov 1 15:15:27 2018] Memory cgroup stats for /slurm/uid_11122/job_58003653/step_extern: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB [Thu Nov 1 15:15:27 2018] Memory cgroup stats for /slurm/uid_11122/job_58003653/step_batch: cache:0KB rss:4452KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:4452KB inactive_file:0KB active_file:0KB unevictable:0KB [Thu Nov 1 15:15:27 2018] Memory cgroup stats for /slurm/uid_11122/job_58003653/step_0: cache:6399032KB rss:1985124KB rss_huge:1476608KB mapped_file:20232KB swap:0KB inactive_anon:1890552KB active_anon:6491116KB inactive_file:1216KB active_file:892KB unevictable:0KB [Thu Nov 1 15:15:27 2018] [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name [Thu Nov 1 15:15:27 2018] [20087] 11122 20087 28321 420 12 0 0 bash [Thu Nov 1 15:15:27 2018] [33058] 11122 33058 63274 1357 31 0 0 srun [Thu Nov 1 15:15:27 2018] 
[33060] 11122 33060 12085 207 23 0 0 srun [Thu Nov 1 15:15:27 2018] [33073] 11122 33073 2416 406 9 0 0 one-step.sh [Thu Nov 1 15:15:27 2018] [33153] 11122 33153 3735255 498759 1385 0 0 java [Thu Nov 1 15:15:27 2018] [42230] 11122 42230 2542 422 9 0 0 one-step.sh [Thu Nov 1 15:15:27 2018] [42240] 11122 42240 2543 421 9 0 0 one-step.sh [Thu Nov 1 15:15:27 2018] [42261] 11122 42261 2542 421 9 0 0 one-step.sh [Thu Nov 1 15:15:27 2018] [42285] 11122 42285 2541 422 9 0 0 one-step.sh [Thu Nov 1 15:15:27 2018] [42302] 11122 42302 2543 422 9 0 0 one-step.sh [Thu Nov 1 15:15:27 2018] [42316] 11122 42316 2542 422 9 0 0 one-step.sh [Thu Nov 1 15:15:27 2018] [42331] 11122 42331 2564 424 9 0 0 one-step.sh [Thu Nov 1 15:15:27 2018] [42359] 11122 42359 2544 421 9 0 0 one-step.sh [Thu Nov 1 15:15:27 2018] [33529] 11122 33529 2148 167 10 0 0 timeout [Thu Nov 1 15:15:27 2018] [33538] 11122 33538 1078 88 7 0 0 time_bin [Thu Nov 1 15:15:27 2018] [33540] 11122 33540 2148 167 10 0 0 timeout [Thu Nov 1 15:15:27 2018] [33541] 11122 33541 2148 166 10 0 0 timeout [Thu Nov 1 15:15:27 2018] [33542] 11122 33542 1609 177 8 0 0 ng [Thu Nov 1 15:15:27 2018] [33543] 11122 33543 1090 89 8 0 0 tail [Thu Nov 1 15:15:27 2018] [33544] 11122 33544 2472 181 11 0 0 awk [Thu Nov 1 15:15:27 2018] [33546] 11122 33546 1078 88 8 0 0 time_bin [Thu Nov 1 15:15:27 2018] [33554] 11122 33554 1078 88 8 0 0 time_bin [Thu Nov 1 15:15:27 2018] [33556] 11122 33556 1609 177 10 0 0 ng [Thu Nov 1 15:15:27 2018] [33562] 11122 33562 1609 177 9 0 0 ng [Thu Nov 1 15:15:27 2018] [33570] 11122 33570 9084 299 18 0 0 tar [Thu Nov 1 15:15:27 2018] [33586] 11122 33586 2564 333 9 0 0 one-step.sh [Thu Nov 1 15:15:27 2018] [33587] 11122 33587 2148 166 10 0 0 timeout [Thu Nov 1 15:15:27 2018] [33588] 11122 33588 2564 279 9 0 0 one-step.sh [Thu Nov 1 15:15:27 2018] [33589] 11122 33589 1078 89 8 0 0 time_bin [Thu Nov 1 15:15:27 2018] [33590] 11122 33590 2472 181 10 0 0 awk [Thu Nov 1 15:15:27 2018] [33591] 11122 33591 1075 48 6 0 0 tr [Thu 
Nov 1 15:15:27 2018] [33592] 11122 33592 2564 243 8 0 0 one-step.sh [Thu Nov 1 15:15:27 2018] [33593] 11122 33593 2542 330 9 0 0 one-step.sh [Thu Nov 1 15:15:27 2018] [33594] 11122 33594 1609 177 9 0 0 ng [Thu Nov 1 15:15:27 2018] [33595] 11122 33595 2542 318 9 0 0 one-step.sh [Thu Nov 1 15:15:27 2018] [33596] 11122 33596 2542 240 8 0 0 one-step.sh [Thu Nov 1 15:15:27 2018] [33597] 11122 33597 2542 240 9 0 0 one-step.sh [Thu Nov 1 15:15:27 2018] [33598] 11122 33598 2542 240 8 0 0 one-step.sh [Thu Nov 1 15:15:27 2018] [33599] 11122 33599 2542 240 8 0 0 one-step.sh [Thu Nov 1 15:15:27 2018] [33600] 11122 33600 2542 240 8 0 0 one-step.sh [Thu Nov 1 15:15:27 2018] Memory cgroup out of memory: Kill process 33576 (java) score 238 or sacrifice child [Thu Nov 1 15:15:27 2018] Killed process 33153 (java) total-vm:14941020kB, anon-rss:1973844kB, file-rss:1008kB, shmem-rss:20184kB[Thu Nov 1 03:40:17 2018] obabel invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=0 [Thu Nov 1 03:40:17 2018] obabel cpuset=step_0 mems_allowed=0-1 [Thu Nov 1 03:40:17 2018] CPU: 29 PID: 123601 Comm: obabel Tainted: G OE ------------ T 3.10.0-693.21.1.el7.x86_64 #1 [Thu Nov 1 03:40:17 2018] Hardware name: Dell Inc. PowerEdge M630/0R10KJ, BIOS 2.5.4 08/17/2017 [Thu Nov 1 03:40:17 2018] Call Trace: [Thu Nov 1 03:40:17 2018] [<ffffffff816ae7c8>] dump_stack+0x19/0x1b [Thu Nov 1 03:40:17 2018] [<ffffffff816a9b90>] dump_header+0x90/0x229 [Thu Nov 1 03:40:17 2018] [<ffffffff810c7c82>] ? default_wake_function+0x12/0x20 [Thu Nov 1 03:40:17 2018] [<ffffffff8118a3d6>] ? find_lock_task_mm+0x56/0xc0 [Thu Nov 1 03:40:17 2018] [<ffffffff811f5fb8>] ? try_get_mem_cgroup_from_mm+0x28/0x60 [Thu Nov 1 03:40:17 2018] [<ffffffff8118a884>] oom_kill_process+0x254/0x3d0 [Thu Nov 1 03:40:17 2018] [<ffffffff811f9cd6>] mem_cgroup_oom_synchronize+0x546/0x570 [Thu Nov 1 03:40:17 2018] [<ffffffff811f9150>] ? 
mem_cgroup_charge_common+0xc0/0xc0 [Thu Nov 1 03:40:17 2018] [<ffffffff8118b114>] pagefault_out_of_memory+0x14/0x90 [Thu Nov 1 03:40:17 2018] [<ffffffff816a7f2e>] mm_fault_error+0x68/0x12b [Thu Nov 1 03:40:17 2018] [<ffffffff816bb741>] __do_page_fault+0x391/0x450 [Thu Nov 1 03:40:17 2018] [<ffffffff816bb835>] do_page_fault+0x35/0x90 [Thu Nov 1 03:40:17 2018] [<ffffffff816b7768>] page_fault+0x28/0x30 [Thu Nov 1 03:40:17 2018] Task in /slurm/uid_11122/job_57832937/step_0 killed as a result of limit of /slurm/uid_11122/job_57832937 [Thu Nov 1 03:40:17 2018] memory: usage 8388608kB, limit 8388608kB, failcnt 363061 [Thu Nov 1 03:40:17 2018] memory+swap: usage 8388608kB, limit 16777216kB, failcnt 0 [Thu Nov 1 03:40:17 2018] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0 [Thu Nov 1 03:40:17 2018] Memory cgroup stats for /slurm/uid_11122/job_57832937: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB [Thu Nov 1 03:40:17 2018] Memory cgroup stats for /slurm/uid_11122/job_57832937/step_extern: cache:152KB rss:3944KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:3936KB inactive_file:76KB active_file:76KB unevictable:0KB [Thu Nov 1 03:40:17 2018] Memory cgroup stats for /slurm/uid_11122/job_57832937/step_batch: cache:0KB rss:4760KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:4760KB inactive_file:0KB active_file:0KB unevictable:0KB [Thu Nov 1 03:40:17 2018] Memory cgroup stats for /slurm/uid_11122/job_57832937/step_0: cache:6554284KB rss:1825468KB rss_huge:401408KB mapped_file:13556KB swap:0KB inactive_anon:439516KB active_anon:7937116KB inactive_file:1500KB active_file:1476KB unevictable:0KB [Thu Nov 1 03:40:17 2018] [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name [Thu Nov 1 03:40:17 2018] [127018] 11122 127018 28354 476 12 0 0 bash [Thu Nov 1 03:40:17 2018] [139355] 11122 139355 63280 1372 33 0 0 srun [Thu Nov 1 
03:40:17 2018] [139357] 11122 139357 12091 215 25 0 0 srun [Thu Nov 1 03:40:17 2018] [139370] 11122 139370 2419 422 10 0 0 one-step.sh [Thu Nov 1 03:40:17 2018] [139443] 11122 139443 2155734 412939 953 0 0 java [Thu Nov 1 03:40:17 2018] [139473] 11122 139473 2564 432 10 0 0 one-step.sh [Thu Nov 1 03:40:17 2018] [139477] 11122 139477 2563 432 10 0 0 one-step.sh [Thu Nov 1 03:40:17 2018] [139485] 11122 139485 2550 422 10 0 0 one-step.sh [Thu Nov 1 03:40:17 2018] [139508] 11122 139508 2563 434 10 0 0 one-step.sh [Thu Nov 1 03:40:17 2018] [139526] 11122 139526 2572 435 10 0 0 one-step.sh [Thu Nov 1 03:40:17 2018] [139539] 11122 139539 2573 426 10 0 0 one-step.sh [Thu Nov 1 03:40:17 2018] [139558] 11122 139558 2563 431 10 0 0 one-step.sh [Thu Nov 1 03:40:17 2018] [139585] 11122 139585 2563 431 10 0 0 one-step.sh [Thu Nov 1 03:40:17 2018] [21062] 11122 21062 32157 931 14 0 0 bash [Thu Nov 1 03:40:17 2018] [91238] 11122 91238 2151 170 10 0 0 timeout [Thu Nov 1 03:40:17 2018] [91239] 11122 91239 1081 88 8 0 0 time_bin [Thu Nov 1 03:40:17 2018] [91240] 11122 91240 1612 180 9 0 0 ng [Thu Nov 1 03:40:17 2018] [104964] 11122 104964 2151 171 10 0 0 timeout [Thu Nov 1 03:40:17 2018] [104969] 11122 104969 1081 88 8 0 0 time_bin [Thu Nov 1 03:40:17 2018] [104975] 11122 104975 1612 181 8 0 0 ng [Thu Nov 1 03:40:17 2018] [123586] 11122 123586 2550 374 10 0 0 one-step.sh [Thu Nov 1 03:40:17 2018] [123592] 11122 123592 2151 171 10 0 0 timeout [Thu Nov 1 03:40:17 2018] [123593] 11122 123593 3325 171 12 0 0 sed [Thu Nov 1 03:40:17 2018] [123596] 11122 123596 1081 89 8 0 0 time_bin [Thu Nov 1 03:40:17 2018] [123601] 11122 123601 130767 40835 261 0 0 obabel [Thu Nov 1 03:40:17 2018] [125363] 11122 125363 2573 377 10 0 0 one-step.sh [Thu Nov 1 03:40:17 2018] [125369] 11122 125369 2151 171 10 0 0 timeout [Thu Nov 1 03:40:17 2018] [125372] 11122 125372 1089 81 8 0 0 uniq [Thu Nov 1 03:40:17 2018] [125373] 11122 125373 3324 171 11 0 0 sed [Thu Nov 1 03:40:17 2018] [125380] 11122 125380 1081 
88 8 0 0 time_bin [Thu Nov 1 03:40:17 2018] [125388] 11122 125388 91272 1302 179 0 0 obabel [Thu Nov 1 03:40:17 2018] [125742] 11122 125742 2563 386 10 0 0 one-step.sh [Thu Nov 1 03:40:17 2018] [125743] 11122 125743 2151 170 10 0 0 timeout [Thu Nov 1 03:40:17 2018] [125744] 11122 125744 1089 81 8 0 0 uniq [Thu Nov 1 03:40:17 2018] [125745] 11122 125745 3324 171 12 0 0 sed [Thu Nov 1 03:40:17 2018] [125746] 11122 125746 1081 87 8 0 0 time_bin [Thu Nov 1 03:40:17 2018] [125747] 11122 125747 91272 1309 180 0 0 obabel [Thu Nov 1 03:40:17 2018] [125752] 11122 125752 2563 383 10 0 0 one-step.sh [Thu Nov 1 03:40:17 2018] [125753] 11122 125753 2151 170 9 0 0 timeout [Thu Nov 1 03:40:17 2018] [125754] 11122 125754 1089 82 9 0 0 uniq [Thu Nov 1 03:40:17 2018] [125755] 11122 125755 3324 172 11 0 0 sed [Thu Nov 1 03:40:17 2018] [125756] 11122 125756 1081 88 8 0 0 time_bin [Thu Nov 1 03:40:17 2018] [125757] 11122 125757 91272 1309 179 0 0 obabel [Thu Nov 1 03:40:17 2018] [125772] 11122 125772 2564 383 10 0 0 one-step.sh [Thu Nov 1 03:40:17 2018] [125773] 11122 125773 2151 170 10 0 0 timeout [Thu Nov 1 03:40:17 2018] [125774] 11122 125774 1088 86 7 0 0 uniq [Thu Nov 1 03:40:17 2018] [125775] 11122 125775 3324 172 12 0 0 sed [Thu Nov 1 03:40:17 2018] [125776] 11122 125776 1081 88 8 0 0 time_bin [Thu Nov 1 03:40:17 2018] [125778] 11122 125778 66934 902 131 0 0 obabel [Thu Nov 1 03:40:17 2018] [125819] 11122 125819 2563 383 10 0 0 one-step.sh [Thu Nov 1 03:40:17 2018] [125820] 11122 125820 2151 171 11 0 0 timeout [Thu Nov 1 03:40:17 2018] [125821] 11122 125821 1088 87 8 0 0 uniq [Thu Nov 1 03:40:17 2018] [125822] 11122 125822 3324 172 10 0 0 sed [Thu Nov 1 03:40:17 2018] [125823] 11122 125823 1081 88 8 0 0 time_bin [Thu Nov 1 03:40:17 2018] [125824] 11122 125824 66934 931 132 0 0 obabel [Thu Nov 1 03:40:17 2018] [126131] 11122 126131 40335 445 33 0 0 ps [Thu Nov 1 03:40:17 2018] [126132] 11122 126132 26980 166 10 0 0 head [Thu Nov 1 03:40:17 2018] [126133] 11122 126133 26990 153 10 
0 0 column [Thu Nov 1 03:40:17 2018] Memory cgroup out of memory: Kill process 125649 (NGSession 36387) score 197 or sacrifice child [Thu Nov 1 03:40:17 2018] Killed process 139443 (java) total-vm:8622936kB, anon-rss:1637312kB, file-rss:960kB, shmem-rss:13484kB

The java application which runs in the background is always killed at the end by the OOM killer because killing it frees up the most memory, I think. Regarding the java program, I have checked the garbage collection/GC log files; all normal there. Also, I used three different versions of the JVM, but the problem seems to be independent of that. How can I find out what is really causing the OOM killer to terminate my programs? I am not an admin on the machines I use; they are compute nodes of a Linux cluster. The kernel version is 3.10.0-693.21.1.el7.x86_64.
Why is the Linux OOM killer terminating my programs?
shared Memory used (mostly) by tmpfs (Shmem in /proc/meminfo, available on kernels 2.6.32, displayed as zero if not available)

So the manpage definition of Shared is not as helpful as it could be :(. If the tmpfs use does not reflect this high value of Shared, then the value must represent some process(es) "who did mmap() with MAP_SHARED|MAP_ANONYMOUS" (or System V shared memory). 6G of shared memory on an 8G system is still a lot. Seriously, you don't want that, at least not on a desktop. It's weird that it seems to contribute to "buff/cache" as well. But I did a quick test with python and that's just how it works. To show the processes with the most shared memory, use top -o SHR -n 1.

System V shared memory

Finally it's possible you have some horrible legacy software that uses System V shared memory segments. If they get leaked, they won't show up in top :(. You can list them with ipcs -m -t. Hopefully the most recently created one is still in use. Take the shmid number and e.g.

$ ipcs -m -t
------ Shared Memory Attach/Detach/Change Times --------
shmid      owner      attached             detached             changed
3538944    alan       Apr 30 20:35:15      Apr 30 20:35:15      Apr 30 16:07:41
3145729    alan       Apr 30 20:35:15      Apr 30 20:35:15      Apr 30 15:04:09
4587522    alan       Apr 30 20:37:38      Not set              Apr 30 20:37:38

# sudo grep 4587522 /proc/*/maps

-> then the numbers shown in the /proc paths are the PIDs of the processes that use the SHM. (So you could e.g. grep the output of ps for that pid number.)

Apparent contradictions

Xorg has 8G mapped, even though you don't have separate video card RAM. It only has 150M resident. It's not that the rest is swapped out, because you don't have enough swap space. The SHM segments shown by ipcs are all attached to two processes. So none of them have leaked, and they should all show up in the SHR column of top (double-counted even). It's ok if the number of pages used is less than the size of the memory segment; that just means there are pages that haven't been used.
But free says we have 6GB of allocated shared memory to account for, and we can't find that.
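The grep-through-/proc step can also be scripted. This sketch (the function name and the two sample lines are mine, taken in shape from the question's output) parses grep-style "/proc/&lt;pid&gt;/maps" lines and pulls out the PIDs attached to a given System V shmid:

```python
import re

# Sample output of `sudo grep <shmid> /proc/*/maps` (from the question).
sample = """/proc/2084/maps:7ff4a56cc000-7ff4a5acc000 rw-s 00000000 00:05 786436 /SYSV00000000 (deleted)
/proc/3984/maps:7f4574d00000-7f4575100000 rw-s 00000000 00:05 786436 /SYSV00000000 (deleted)"""

def pids_for_shmid(maps_output, shmid):
    """Return the PIDs whose maps contain the given SysV SHM segment."""
    pids = []
    for line in maps_output.splitlines():
        # /proc/<pid>/maps:<range> rw-s <off> <dev> <shmid> /SYSV...
        m = re.match(r'/proc/(\d+)/maps:.*\s(\d+)\s+/SYSV', line)
        if m and int(m.group(2)) == shmid:
            pids.append(int(m.group(1)))
    return pids

print(pids_for_shmid(sample, 786436))  # → [2084, 3984]
```

You could then feed those PIDs to ps to get the process names, as done manually in the question.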
I am experiencing a weird issue lately: Sometimes (I cannot reproduce it on purpose), my system is using all its swap, despite there being more than enough free RAM. If this happens, the systems then becomes unresponsive for a couple of minutes, then the OOM killer kills either a "random" process which does not help much, or the X server. If it kills a "random" process, the system does not become responsive (there is still no swap but much free RAM); if it kills X, the swap is freed and the system becomes responsive again. Output of free when it happens: $ free -htl total used free shared buff/cache available Mem: 7.6G 1.4G 60M 5.7G 6.1G 257M Low: 7.6G 7.5G 60M High: 0B 0B 0B Swap: 3.9G 3.9G 0B Total: 11G 5.4G 60Muname -a: Linux fedora 4.4.7-300.fc23.x86_64 #1 SMP Wed Apr 13 02:52:52 UTC 2016 x86_64 x86_64 x86_64 GNU/LinuxSwapiness: cat /proc/sys/vm/swappiness 5Relevant section in dmesg: http://pastebin.com/0P0TLfsC tmpfs: $ df -h -t tmpfs Filesystem Size Used Avail Use% Mounted on tmpfs 3.8G 1.5M 3.8G 1% /dev/shm tmpfs 3.8G 1.7M 3.8G 1% /run tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup tmpfs 3.8G 452K 3.8G 1% /tmp tmpfs 776M 16K 776M 1% /run/user/42 tmpfs 776M 32K 776M 1% /run/user/1000Meminfo: http://pastebin.com/CRmitCiJ top -o SHR -n 1 Tasks: 231 total, 1 running, 230 sleeping, 0 stopped, 0 zombie %Cpu(s): 8.5 us, 3.0 sy, 0.3 ni, 86.9 id, 1.3 wa, 0.0 hi, 0.0 si, 0.0 st KiB Mem : 7943020 total, 485368 free, 971096 used, 6486556 buff/cache KiB Swap: 4095996 total, 1698992 free, 2397004 used. 
989768 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 2066 mkamlei+ 20 0 8342764 163908 145208 S 0.0 2.1 0:59.62 Xorg 2306 mkamlei+ 20 0 1892816 138536 27168 S 0.0 1.7 1:25.47 gnome-shell 3118 mkamlei+ 20 0 596392 21084 13152 S 0.0 0.3 0:04.86 gnome-terminal- 1646 gdm 20 0 1502632 60324 12976 S 0.0 0.8 0:01.91 gnome-shell 2269 mkamlei+ 20 0 1322592 22440 8124 S 0.0 0.3 0:00.87 gnome-settings- 486 root 20 0 47048 8352 7656 S 0.0 0.1 0:00.80 systemd-journal 2277 mkamlei+ 9 -11 570512 10080 6644 S 0.0 0.1 0:15.33 pulseaudio 2581 mkamlei+ 20 0 525424 19272 5796 S 0.0 0.2 0:00.37 redshift-gtk 1036 root 20 0 619016 9204 5408 S 0.0 0.1 0:01.70 NetworkManager 1599 gdm 20 0 1035672 11820 5120 S 0.0 0.1 0:00.28 gnome-settings- 2386 mkamlei+ 20 0 850856 24948 4944 S 0.0 0.3 0:05.84 goa-daemon 2597 mkamlei+ 20 0 1138200 13104 4596 S 0.0 0.2 0:00.28 evolution-alarm 2369 mkamlei+ 20 0 1133908 16472 4560 S 0.0 0.2 0:00.49 evolution-sourc 2529 mkamlei+ 20 0 780088 54080 4380 S 0.0 0.7 0:01.14 gnome-software 2821 mkamlei+ 20 0 1357820 44320 4308 S 0.0 0.6 0:00.23 evolution-calen 2588 mkamlei+ 20 0 1671848 55744 4300 S 0.0 0.7 0:00.49 evolution-calen 2525 mkamlei+ 20 0 613512 8928 4188 S 0.0 0.1 0:00.19 abrt-applet ipcs: [mkamleithner@fedora ~]$ ipcs -m -t------ Shared Memory Attach/Detach/Change Times -------- shmid owner attached detached changed 294912 mkamleithn Apr 30 20:29:16 Not set Apr 30 20:29:16 393217 mkamleithn Apr 30 20:29:19 Apr 30 20:29:19 Apr 30 20:29:17 491522 mkamleithn Apr 30 20:42:21 Apr 30 20:42:21 Apr 30 20:29:18 524291 mkamleithn Apr 30 20:38:10 Apr 30 20:38:10 Apr 30 20:29:18 786436 mkamleithn Apr 30 20:38:12 Not set Apr 30 20:38:12 [mkamleithner@fedora ~]$ ipcs------ Message Queues -------- key msqid owner perms used-bytes messages ------ Shared Memory Segments -------- key shmid owner perms bytes nattch status 0x00000000 294912 mkamleithn 600 524288 2 dest 0x00000000 393217 mkamleithn 600 2576 2 dest 0x00000000 491522 mkamleithn 600 4194304 
2 dest 0x00000000 524291 mkamleithn 600 524288 2 dest 0x00000000 786436 mkamleithn 600 4194304 2 dest ------ Semaphore Arrays -------- key semid owner perms nsems [mkamleithner@fedora ~]$ ipcs -m -t------ Shared Memory Attach/Detach/Change Times -------- shmid owner attached detached changed 294912 mkamleithn Apr 30 20:29:16 Not set Apr 30 20:29:16 393217 mkamleithn Apr 30 20:29:19 Apr 30 20:29:19 Apr 30 20:29:17 491522 mkamleithn Apr 30 20:42:21 Apr 30 20:42:21 Apr 30 20:29:18 524291 mkamleithn Apr 30 20:38:10 Apr 30 20:38:10 Apr 30 20:29:18 786436 mkamleithn Apr 30 20:38:12 Not set Apr 30 20:38:12 [mkamleithner@fedora ~]$ sudo grep 786436 /proc/*/maps /proc/2084/maps:7ff4a56cc000-7ff4a5acc000 rw-s 00000000 00:05 786436 /SYSV00000000 (deleted) /proc/3984/maps:7f4574d00000-7f4575100000 rw-s 00000000 00:05 786436 /SYSV00000000 (deleted)[mkamleithner@fedora ~]$ sudo grep 524291 /proc/*/maps /proc/2084/maps:7ff4a4593000-7ff4a4613000 rw-s 00000000 00:05 524291 /SYSV00000000 (deleted) /proc/2321/maps:7fa9b8a67000-7fa9b8ae7000 rw-s 00000000 00:05 524291 /SYSV00000000 (deleted)[mkamleithner@fedora ~]$ sudo grep 491522 /proc/*/maps /proc/2084/maps:7ff4a4ad3000-7ff4a4ed3000 rw-s 00000000 00:05 491522 /SYSV00000000 (deleted) /proc/2816/maps:7f2763ba1000-7f2763fa1000 rw-s 00000000 00:05 491522 /SYSV00000000 (deleted)[mkamleithner@fedora ~]$ sudo grep 393217 /proc/*/maps /proc/2084/maps:7ff4b1a60000-7ff4b1a61000 rw-s 00000000 00:05 393217 /SYSV00000000 (deleted) /proc/2631/maps:7fb89be79000-7fb89be7a000 rw-s 00000000 00:05 393217 /SYSV00000000 (deleted)[mkamleithner@fedora ~]$ sudo grep 294912 /proc/*/maps /proc/2084/maps:7ff4a5510000-7ff4a5590000 rw-s 00000000 00:05 294912 /SYSV00000000 (deleted) /proc/2582/maps:7f7902dd3000-7f7902e53000 rw-s 00000000 00:05 294912 /SYSV00000000 (deleted)getting the process names: [mkamleithner@fedora ~]$ ps aux | grep 2084 mkamlei+ 2084 5.1 2.0 8149580 159272 tty2 Sl+ 20:29 1:10 /usr/libexec/Xorg vt2 -displayfd 3 -auth 
/run/user/1000/gdm/Xauthority -nolisten tcp -background none -noreset -keeptty -verbose 3 mkamlei+ 5261 0.0 0.0 118476 2208 pts/0 S+ 20:52 0:00 grep --color=auto 2084 [mkamleithner@fedora ~]$ ps aux | grep 3984 mkamlei+ 3984 11.4 3.6 1355100 293240 tty2 Sl+ 20:38 1:38 /usr/lib64/firefox/firefox mkamlei+ 5297 0.0 0.0 118472 2232 pts/0 S+ 20:52 0:00 grep --color=auto 3984Should I also post the results for the other shmids? I don't really know how to interpret the output. How can I fix this? Edit: Starting the game "Papers, Please" always seems to trigger this problem after some time. It also happens sometimes when this game is not started, though. Edit2: Seems to be an X issue. On wayland this does not happen. Might be due to custom settings in xorg.conf. Final Edit: For anyone experiencing the same problem: I was using DRI 2. Switching to DRI 3 also fixes the problem. this is my relevant section in the xorg.conf:Section "Device" Identifier "Intel Graphics" Driver "intel" Option "AccelMethod" "sna" # Option "Backlight" "intel_backlight" BusID "PCI:0:2:0" Option "DRI" "3" #here Option "TearFree" "true" EndSectionThe relevant file on my system is in /usr/share/X11/xorg.conf.d/ .
Linux using whole swap, becoming unresponsive while there is plenty of free RAM
As you fill memory with applications, the various block/filesystem caches get pushed out of that same memory. These caches are crucial for fast lookup of files and other metadata. When there is no room left for the caches, the kernel has to look everything up directly from the filesystem, which is far slower and hence causes heavy IO (that is your bottleneck). To solve this, either add more memory or create a swap file or partition.
On my work laptop with an SSD and no swap, I sometimes run out of memory when running RAM-expensive applications (virtual machine, etc). When that happens, the system becomes slow (expected) but what I don't understand is why the disk usage LED lights up and stays that way until I manage to kill some tasks to free up memory. That happens every time the system runs out of memory even if there's absolutely no disk IO before that.
Why is IO so high when almost out of memory
You're misinterpreting the output of free. What you posted shows that you have 19 GB of RAM available. The 23 GB you're seeing as used includes memory the system uses as cache, which is still readily available to applications; that is also why top shows the memory as free. See linuxatemyram.com for a more detailed explanation.
The server has about 24GB memory. By running free -g I find the memory is used up:

             total       used       free     shared    buffers     cached
Mem:            23         23          0          0          0         18
-/+ buffers/cache:          4         19
Swap:           56          2         53

Then I did some research into what has used up all this memory by running top and pressing M. But the memory looks quite free judging by the %MEM column. What can I do to free some memory? This is a server for calculation, so it is better not to restart it.
cannot find what has used all the memory
oom_adj is deprecated and provided for legacy purposes only. Internally Linux uses oom_score_adj which has a greater range: oom_adj goes up to 15 while oom_score_adj goes up to 1000. Whenever you write to oom_adj (let's say 9) the kernel does this: oom_adj = (oom_adj * OOM_SCORE_ADJ_MAX) / -OOM_DISABLE;and stores that to oom_score_adj. OOM_SCORE_ADJ_MAX is 1000 and OOM_DISABLE is -17. So for 9 you'll get oom_adj=(9 * 1000) / 17 ~= 529.411 and since these values are integers, oom_score_adj will hold 529. Now when you read oom_adj the kernel will do this: oom_adj = (task->signal->oom_score_adj * -OOM_DISABLE) / OOM_SCORE_ADJ_MAX;So for 529 you'll get: oom_adj = (529 * 17) / 1000 = 8.993 and since the kernel is using integers and integer arithmetic, this will become 8. So there... you write 9 and you get 8 because of fixed point / integer arithmetic.
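The round trip can be reproduced with plain shell arithmetic (the constants 1000 and 17 are OOM_SCORE_ADJ_MAX and -OOM_DISABLE from the kernel source):

```shell
# Simulate the kernel's oom_adj <-> oom_score_adj conversion;
# integer division truncates in both directions.
oom_adj_in=9
oom_score_adj=$(( oom_adj_in * 1000 / 17 ))    # write path: 9000/17 = 529
oom_adj_out=$(( oom_score_adj * 17 / 1000 ))   # read path: 8993/1000 = 8
echo "wrote $oom_adj_in, stored $oom_score_adj, read back $oom_adj_out"
```

which prints "wrote 9, stored 529, read back 8", matching the one-less behaviour in the question.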
I am trying to set the oom_adj value for the out-of-memory killer, and each time I do (regardless of the process) I get back exactly one less than I set (at least for positive integers; I haven't tried negative integers since I want these processes to be killed by the OOM killer first).

[root@server ~]# echo 10 > /proc/12581/oom_adj
[root@server ~]# cat /proc/12581/oom_adj
9
[root@server ~]# echo 9 > /proc/12581/oom_adj
[root@server ~]# cat /proc/12581/oom_adj
8
[root@server ~]# echo 8 > /proc/12581/oom_adj
[root@server ~]# cat /proc/12581/oom_adj
7
[root@server ~]# echo 7 > /proc/12581/oom_adj
[root@server ~]# cat /proc/12581/oom_adj
6
[root@server ~]#

Is this expected behavior? If not, why is this happening?
OOM Killer value always one less than set
A combination of three facts causes your OOM problem: small page size, large VIRT, and pagetables. Your logs clearly show that almost all the RAM was used by pagetables, not by process memory (for example, not by RESident pages; those got mostly pushed out to swap). The bummer about x86_64/x86 pagetables is that when multiple processes map exactly the same region of shared memory, they each keep separate pagetables. Hence if one process maps 1 TB (it's included in the VIRT), the kernel will create, say, 1 GB of pagetables (not shown in top at all, as these are not counted as belonging to a process). But if one hundred processes map the same 1 TB area, they take up 100 GB of your RAM just to redundantly store the same metadata! The large VIRT of a single process could be simply caused by opening and mmaping a file (either named or "anonymous"), although there are plenty of alternative explanations. I guess the OOM killer doesn't take the size of pagetables into account when choosing a process to kill. In your case mongodb was apparently the primary candidate for an OOM kill in terms of RES usage. Even though the memory gain would be minuscule, the system had no choice, so it killed what it could kill. The most obvious way to avoid your problem would be to use huge pages, if only mongodb supported them (I'm not suggesting transparent huge pages; consider vanilla non-transparent huge pages instead). A cursory search sadly says mongodb doesn't support even non-transparent huge pages. Another way is to limit the number of spawned processes, or somehow decrease their VIRT size.
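To get a feel for the scale involved, here is a back-of-the-envelope sketch for the scenario above (x86_64, 4 KiB pages, 8 bytes per last-level page-table entry; the figures are illustrative only and ignore the upper table levels):

```shell
map_bytes=$(( 1 << 40 ))   # 1 TiB mapped per process
page=4096                  # x86_64 base page size
pte=8                      # bytes per last-level page-table entry
procs=100                  # processes mapping the same shared region
per_proc=$(( map_bytes / page * pte ))   # last-level PTEs per process
total=$(( per_proc * procs ))            # duplicated across all processes
echo "$(( per_proc >> 30 )) GiB of PTEs per process, $(( total >> 30 )) GiB total"
```

Counting only the last level this already comes to 2 GiB per process, so a hundred mappers burn hundreds of GiB on identical metadata.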
I'm playing with MongoDB clusters. After few OOM killers, I decided to ulimit mongoDB with memory to 4G of RAM. After few hours, it was killed again with OOM. So my question is not about MongoDB, it's about memory management in linux. Here is an HTOP just a few minutes before OOM.Why are there 4.2T of VIRT and only 11M of RES? Some useful info: root@mongodb: pmap -d 24059 .... mapped: 4493752480K writeable/private: 2247504740K shared: 2246203932KHere is the dmesg log: [617568.768581] bash invoked oom-killer: gfp_mask=0x26000c0, order=2, oom_score_adj=0 [617568.768585] bash cpuset=/ mems_allowed=0 [617568.768590] CPU: 0 PID: 4686 Comm: bash Not tainted 4.4.0-83-generic #106-Ubuntu [617568.768591] Hardware name: Xen HVM domU, BIOS 4.2.amazon 02/16/2017 [617568.768592] 0000000000000286 00000000c18427a2 ffff8800a41f7b10 ffffffff813f9513 [617568.768595] ffff8800a41f7cc8 ffff8800ba798000 ffff8800a41f7b80 ffffffff8120b53e [617568.768597] ffffffff81cd6fd7 0000000000000000 ffffffff81e677e0 0000000000000206 [617568.768600] Call Trace: [617568.768605] [<ffffffff813f9513>] dump_stack+0x63/0x90 [617568.768609] [<ffffffff8120b53e>] dump_header+0x5a/0x1c5 [617568.768613] [<ffffffff81192ae2>] oom_kill_process+0x202/0x3c0 [617568.768614] [<ffffffff81192f09>] out_of_memory+0x219/0x460 [617568.768617] [<ffffffff81198ef8>] __alloc_pages_slowpath.constprop.88+0x938/0xad0 [617568.768620] [<ffffffff81199316>] __alloc_pages_nodemask+0x286/0x2a0 [617568.768622] [<ffffffff811993cb>] alloc_kmem_pages_node+0x4b/0xc0 [617568.768625] [<ffffffff8107eafe>] copy_process+0x1be/0x1b20 [617568.768627] [<ffffffff811c1e44>] ? handle_mm_fault+0xcf4/0x1820 [617568.768631] [<ffffffff81349133>] ? 
security_file_alloc+0x33/0x50 [617568.768633] [<ffffffff810805f0>] _do_fork+0x80/0x360 [617568.768635] [<ffffffff81080979>] SyS_clone+0x19/0x20 [617568.768639] [<ffffffff81840b72>] entry_SYSCALL_64_fastpath+0x16/0x71 [617568.768641] Mem-Info: [617568.768644] active_anon:130 inactive_anon:192 isolated_anon:0 active_file:197 inactive_file:202 isolated_file:20 unevictable:915 dirty:0 writeback:185 unstable:0 slab_reclaimable:27072 slab_unreclaimable:5594 mapped:680 shmem:19 pagetables:1974772 bounce:0 free:18777 free_pcp:1 free_cma:0 [617568.768646] Node 0 DMA free:15904kB min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15988kB managed:15904kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes [617568.768651] lowmem_reserve[]: 0 3745 7966 7966 7966 [617568.768654] Node 0 DMA32 free:49940kB min:5332kB low:6664kB high:7996kB active_anon:512kB inactive_anon:756kB active_file:776kB inactive_file:800kB unevictable:2828kB isolated(anon):0kB isolated(file):80kB present:3915776kB managed:3835092kB mlocked:2828kB dirty:0kB writeback:740kB mapped:2360kB shmem:52kB slab_reclaimable:69736kB slab_unreclaimable:8316kB kernel_stack:2272kB pagetables:3674424kB unstable:0kB bounce:0kB free_pcp:4kB local_pcp:4kB free_cma:0kB writeback_tmp:0kB pages_scanned:6592 all_unreclaimable? 
no [617568.768658] lowmem_reserve[]: 0 0 4221 4221 4221 [617568.768660] Node 0 Normal free:9264kB min:6008kB low:7508kB high:9012kB active_anon:8kB inactive_anon:12kB active_file:12kB inactive_file:8kB unevictable:832kB isolated(anon):0kB isolated(file):0kB present:4587520kB managed:4322680kB mlocked:832kB dirty:0kB writeback:0kB mapped:360kB shmem:24kB slab_reclaimable:38552kB slab_unreclaimable:14060kB kernel_stack:1680kB pagetables:4224664kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:14432 all_unreclaimable? yes [617568.768664] lowmem_reserve[]: 0 0 0 0 0 [617568.768667] Node 0 DMA: 0*4kB 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB [617568.768675] Node 0 DMA32: 11687*4kB (UME) 410*8kB (UME) 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 50028kB [617568.768682] Node 0 Normal: 1878*4kB (UME) 1*8kB (H) 1*16kB (H) 0*32kB 1*64kB (H) 1*128kB (H) 2*256kB (H) 0*512kB 1*1024kB (H) 0*2048kB 0*4096kB = 9264kB [617568.768691] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB [617568.768692] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB [617568.768693] 1275 total pagecache pages [617568.768694] 249 pages in swap cache [617568.768695] Swap cache stats: add 30567734, delete 30567485, find 17605568/26043265 [617568.768696] Free swap = 7757000kB [617568.768697] Total swap = 8388604kB [617568.768698] 2129821 pages RAM [617568.768699] 0 pages HighMem/MovableOnly [617568.768699] 86402 pages reserved [617568.768700] 0 pages cma reserved [617568.768701] 0 pages hwpoisoned [617568.768702] [ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name [617568.768706] [ 402] 0 402 25742 324 18 3 73 0 lvmetad [617568.768708] [ 440] 0 440 10722 173 22 3 301 -1000 systemd-udevd [617568.768710] [ 836] 0 836 4030 274 11 3 218 0 dhclient [617568.768711] [ 989] 0 
989 1306 400 8 3 58 0 iscsid [617568.768713] [ 990] 0 990 1431 880 8 3 0 -17 iscsid [617568.768715] [ 996] 104 996 64099 210 28 3 351 0 rsyslogd [617568.768716] [ 1001] 0 1001 189977 0 33 4 839 0 lxcfs [617568.768718] [ 1005] 107 1005 10746 298 25 3 164 -900 dbus-daemon [617568.768720] [ 1031] 0 1031 1100 289 8 3 36 0 acpid [617568.768721] [ 1033] 0 1033 16380 290 36 3 203 -1000 sshd [617568.768723] [ 1035] 0 1035 7248 341 18 3 180 0 systemd-logind [617568.768725] [ 1038] 0 1038 68680 0 36 3 251 0 accounts-daemon [617568.768726] [ 1041] 0 1041 6511 376 17 3 57 0 atd [617568.768728] [ 1046] 0 1046 35672 0 27 5 1960 0 snapd [617568.768729] [ 1076] 0 1076 3344 202 11 3 45 0 mdadm [617568.768731] [ 1082] 0 1082 69831 0 38 4 342 0 polkitd [617568.768733] [ 1183] 0 1183 4868 357 14 3 73 0 irqbalance [617568.768734] [ 1192] 113 1192 27508 399 24 3 159 0 ntpd [617568.768735] [ 1217] 0 1217 3665 294 12 3 39 0 agetty [617568.768737] [ 1224] 0 1224 3619 385 12 3 38 0 agetty [617568.768739] [10996] 1000 10996 11312 414 25 3 206 0 systemd [617568.768740] [10999] 1000 10999 15306 0 33 3 475 0 (sd-pam) [617568.768742] [14125] 0 14125 23842 440 50 3 236 0 sshd [617568.768743] [14156] 1000 14156 23842 0 48 3 247 0 sshd [617568.768745] [14157] 1000 14157 5359 425 15 3 512 0 bash [617568.768747] [16461] 998 16461 11312 415 26 3 216 0 systemd [617568.768748] [16465] 998 16465 15306 0 33 3 483 0 (sd-pam) [617568.768750] [16470] 998 16470 4249 0 13 3 39 0 nrsysmond [617568.768751] [16471] 998 16471 63005 109 26 3 891 0 nrsysmond [617568.768753] [17374] 999 17374 283698 0 90 4 6299 0 XXX0 [617568.768754] [22123] 0 22123 8819 305 20 3 72 0 systemd-journal [617568.768756] [28957] 0 28957 6932 379 17 3 90 0 cron [617568.768758] [24059] 114 24059 1123438119 0 1973782 4288 127131 0 mongod [617568.768760] [ 4684] 0 4684 12856 433 29 3 117 0 sudo [617568.768761] [ 4685] 0 4685 12751 387 30 3 105 0 su [617568.768763] [ 4686] 0 4686 5336 312 15 3 493 0 bash [617568.768765] [18016] 999 18016 1127 
145 7 3 25 0 sh [617568.768766] [18017] 999 18017 9516 212 20 4 611 0 XXX1 [617568.768767] [18020] 999 18020 1127 120 8 3 24 0 sh [617568.768769] [18021] 999 18021 9355 299 20 3 415 0 check-disk-usag [617568.768770] [18024] 0 18024 12235 353 27 3 123 0 cron [617568.768772] [18025] 1000 18025 2819 345 10 3 63 0 XXX2 [617568.768773] Out of memory: Kill process 24059 (mongod) score 508 or sacrifice child [617568.772529] Killed process 24059 (mongod) total-vm:4493752476kB, anon-rss:0kB, file-rss:0kBCan somebody shed some light on this matter? Like, why the RAM is full when MongoDB is using only 11M RES mem? Does VIRT also use RAM? If yes which virtual address space? Why did OOM kill it? too much pagetables (cause you see swap is almost empty) EDIT: This guy asked for sorted top: Run top, press f and then highlight %MEM and press s to set the sort order. Post the output. @Raman SailopalThis is, of course, another processID but still, it should be same. Output: pu(s): 0.3%us, 0.3%sy, 0.0%ni, 99.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.3%st Mem: 3840468k total, 3799364k used, 41104k free, 12220k buffers Swap: 0k total, 0k used, 0k free, 70736k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ SWAP DATA COMMAND 16213 mongodb 20 0 3443g 176m 9408 S 0.7 4.7 211:22.66 3.4t 3.9g mongod 7706 sensu 20 0 661m 23m 804 S 0.0 0.6 20:20.92 637m 588m sensu-client 27120 ubuntu 20 0 595m 13m 7240 S 0.0 0.4 0:00.06 581m 569m mongo 24964 ubuntu 20 0 25936 8464 1708 S 0.0 0.2 0:00.54 17m 6736 bash 13858 ubuntu 20 0 26064 7620 728 S 0.0 0.2 0:00.75 18m 6864 bash
How can the OOM killer kill ulimit(ed) process?
On a 64-bit kernel you already have the full 4G accessible by a 32-bit userspace program. See for yourself by entering the following in the terminal (WARNING: your system may become unresponsive if it doesn't have a free 4GiB of RAM when running this):

cd /tmp
cat > test.c <<"EOF"
#include <stdlib.h>
#include <stdio.h>
int main()
{
    size_t allocated=0;
    while(1)
    {
        const size_t chunkSize=4096;
        char* p=malloc(chunkSize);
        if(!p) return 0;
        *p=1;
        allocated+=chunkSize;
        printf("%zu\n",allocated);
    }
    return 0;
}
EOF
gcc test.c -o test -m32 && ./test | tail -n1

On my x86_64 kernel 3.12.18 I get 4282097664 as the result, which is about 4GiB-12.3MiB, so it's fair to consider the 4G/xG split achieved.
It is an useful feature for systems which are using (...still have to use) 32-bit binaries, and the 4G limit came into consideration. It essentially means, that the 32-bit user-space code, the 32-bit user-space data and the (32-bit with PAE, or 64-bit) kernel live in different address spaces, which essentially enables for the processes to use nearly all of the possible maximal 4G address space for their data. Except some ancient announcements, unfortunately I couldn't find from it any more:I am pleased to announce the first public release of the "4GB/4GB VM split" patch, for the 2.5.74 Linux kernel: http://redhat.com/~mingo/4g-patches/4g-2.5.74-F8 The 4G/4G split feature is primarily intended for large-RAM x86 systems, which want to (or have to) get more kernel/user VM, at the expense of per-syscall TLB-flush overhead. On x86, the total amount of virtual memory - as we all know - is limited to 4GB. Of this total 4GB VM, userspace uses 3GB (0x00000000-0xbfffffff), the kernel uses 1GB (0xc0000000-0xffffffff). This is VM scheme is called the 3/1 split. This split works perfecly fine up until 1 GB of RAM - and it works adequately well even after that, due to 'highmem', which moves various larger caches (and objects) into the high memory area.On my tests, some of my processes start to dying roughly at 2-3 GB. How could I do achieve this? I use a relative recent kernel (4.10). I can use a 64-bit kernel on a 32-bit user space or use a 32-bit PAE kernel. It is enough, if only some of the processes use 4G/4G, but they seem to really need it.
How can I enable 4G/4G split in Linux?
Free RAM is wasted RAM; the fact that the amount of free RAM is low on your system is a good sign, not a bad one. What’s important is the amount of RAM used by applications, and stalls related to excessive swap use. In your case, the amount of RAM used is low compared to the amount installed, and there isn’t anything to be concerned about. On this type of graph, the only things to watch out for are excessive swap use and excessive RAM use, and even then the only self-sufficient indicator is excessive RAM use. Excessive swap use is only a concern if there’s excessive swap activity, i.e. the system is spending too much time swapping pages out and back in, and you can’t see that from this graph.
When visualizing some memory related metrics on server level, I get a chart which looks like this:The area below the blue line is RAM Used. The area below the red line and above the blue line is RAM Cache + Buffer. The area below the black line and above the red line is RAM Free. The area below the orange line and above the black line is SWAP Used. As you can see in the chart: RAM Used is slightly decreasing over time (or at least it is not increasing). But RAM Free is decreasing as well due to an increase of RAM Cache + Buffer. We try to estimate if this server will run out of memory in the future and therefore created a trend line for RAM Free which is obviously decreasing and therefore the trend analysis suggests that there is no RAM Free anymore in the near future and memory problems will occur. My questions are now:Is this a valid approach or should we rather focus on a combined metric (e.g. RAM Free + Ram Cache + Buffer) or only RAM Used? Is a strongly decreasing RAM Free and an increasing RAM Cache + Buffer a dangerous sign regarding the available memory or is this nothing to worry about? If this is no valid approach at all, what can one derive from such a visualization or from such metrics?
RAM Free decreases over time due to increasing RAM Cache + Buffer
In the absence of any process writing something to /proc/sys/kernel/sysrq (possibly via the sysctl command) at any point since boot (including in the initramfs)¹, the default value will be as configured at kernel compilation time. You can find that out with:

$ grep -i sysrq "/boot/config-$(uname -r)"
CONFIG_MAGIC_SYSRQ=y
CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE=0x01b6
CONFIG_MAGIC_SYSRQ_SERIAL=y
CONFIG_MAGIC_SYSRQ_SERIAL_SEQUENCE=""

Here for me (on Debian as well), it's enabled by default but with 0x01b6, that is 438 or 0b110110110 as the mask. To check the current value:

$ cat /proc/sys/kernel/sysrq
438
$ sysctl kernel.sysrq
kernel.sysrq = 438

That's 2|4|16|32|128|256, so:

2 = 0x2 - enable control of console logging level
4 = 0x4 - enable control of keyboard (SAK, unraw)
16 = 0x10 - enable sync command
32 = 0x20 - enable remount read-only
128 = 0x80 - allow reboot/poweroff
256 = 0x100 - allow nicing of all RT tasks

So all but:

8 = 0x8 - enable debugging dumps of processes etc.
64 = 0x40 - enable signalling of processes (term, kill, oom-kill)

You can check which bit of the bitmask allows which key in drivers/tty/sysrq.c in the kernel source code. f is allowed by SYSRQ_ENABLE_SIGNAL with value 0x0040, that is 64 above, without surprise. And that bit also controls e (end all tasks), j (thaw all frozen FS) and i (kill all tasks). So it's not possible to enable all except f. The best you can do is enable all but e, f, i, j by adding the 0x8 (SYSRQ_ENABLE_DUMP) bit, which governs c, l, t, p, w, z, m (also quite dangerous), to the default by writing 446 to /proc/sys/kernel/sysrq. However, I would only deviate from the safer 438 default when debugging some kernel-related issue where you lose shell access to the machine, or if no non-admin has physical access to a keyboard or serial line connected to the machine.

¹ also note the sysrq_always_enabled kernel command line parameter, which bypasses all restrictions.
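The bitmask juggling is easy to check in a shell; this just decomposes the Debian default and adds the dump bit as described above:

```shell
mask=438    # Debian's 0x01b6 default
# Bit values are the ones documented in the kernel's admin-guide/sysrq.rst
for bit in 2 4 8 16 32 64 128 256; do
    if (( mask & bit )); then printf 'bit %3d enabled\n' "$bit"; fi
done
echo "default + dump bit (0x8): $(( mask | 8 ))"
```

The last line prints 446, the value to write to /proc/sys/kernel/sysrq for "all but e, f, i, j".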
I was following a guide for automatically decrypting the hard drive on boot, using self-generated keys, and tpm2 variables, and near the end it makes this point that seems to make sense: https://blastrock.github.io/fde-tpm-sb.html#disable-the-magic-sysrq-keyThe magic SysRq key allows running some special kernel actions. The most dangerous ones are disabled by default, and you should keep them that way for maximum security. For example, one of them (f) will invoke the OOM-killer. This function could kill your lockscreen, giving full access to your desktop to a malicious user.The problem is that I only found how to disable all sysrq keys, e.g. https://askubuntu.com/questions/911522/how-can-i-enable-the-magic-sysrq-key-on-ubuntu-desktop or https://askubuntu.com/questions/11002/alt-sysrq-reisub-doesnt-reboot-my-laptop, using something adding a /etc/sysctl.d/90-sysrq.conf file with this line: kernel.sysrq=1I would like if possible to be able to use all the other keys e.g. REISUB in case the system crashes, and only have the F key disabled. I also found this article https://www.kernel.org/doc/html/latest/admin-guide/sysrq.html, that mentions adding a bitmask like: 2 = 0x2 - enable control of console logging level 4 = 0x4 - enable control of keyboard (SAK, unraw) 8 = 0x8 - enable debugging dumps of processes etc. 16 = 0x10 - enable sync command 32 = 0x20 - enable remount read-only 64 = 0x40 - enable signalling of processes (term, kill, oom-kill) 128 = 0x80 - allow reboot/poweroff 256 = 0x100 - allow nicing of all RT tasksbut I don't understand how to have only sysrq-f disabled, and all other keys at their default value. 
The current setup on my laptop (debian 12), is the following: $ grep -IirF sysrq /etc/sysctl.* /etc/sysctl.conf:# 0=disable, 1=enable all, >1 bitmask of sysrq functions /etc/sysctl.conf:# See https://www.kernel.org/doc/html/latest/admin-guide/sysrq.html /etc/sysctl.conf:#kernel.sysrq=438$ grep -IirF sysrq /etc/sysctl.d/* /etc/sysctl.d/99-sysctl.conf:# 0=disable, 1=enable all, >1 bitmask of sysrq functions /etc/sysctl.d/99-sysctl.conf:# See https://www.kernel.org/doc/html/latest/admin-guide/sysrq.html /etc/sysctl.d/99-sysctl.conf:#kernel.sysrq=438
Disable sysrq f (OOM-killer) but leave other sysrq keys operational
The overcommit_memory setting is taken into account in three places in the memory-management subsystem.

- The main one is __vm_enough_memory in mm/util.c, which decides whether enough memory is available to allow a memory allocation to proceed (note that this is a utility function which isn't necessarily invoked). If overcommit_memory is 1, this function always succeeds. If it's 2, it checks the actual available memory. If it's 0, it uses the famous heuristic which you mention; that proceeds as follows:
  1. count the number of free pages
  2. add the number of file-backed pages (these can be recovered)
  3. remove pages used for shared memory
  4. add swap pages
  5. add reclaimable pages
  6. account for reserved pages
  7. leave some memory for root (if the allocation isn't being done by a cap_sys_admin process)
  The resulting total is used as the threshold for memory allocations.
- mmap also checks the setting: MAP_NORESERVE is honoured if overcommit is allowed (modes 0 and 1), and results in allocations with no backing swap (VM_NORESERVE). In this particular case, mode 0 is effectively equivalent to mode 1; this is what "calls of mmap(2) with MAP_NORESERVE are not checked" is referring to: it means that MAP_NORESERVE mmap calls will always succeed, and over-allocation will result in the OOM-killer stepping in after the fact, or a segment violation when a write is attempted.
- shmem has similar behaviour to mmap.

Running out of address space should cause allocation failures, not OOM-kills, since the allocation can't actually proceed.
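With made-up numbers, the mode-0 heuristic amounts to a simple sum. This is a sketch of the threshold calculation, simplified from __vm_enough_memory; all figures are hypothetical and in pages:

```shell
# Purely illustrative page counts, not read from a real system
free=1000000; filebacked=500000; shmem=50000
swap=2000000; reclaimable=100000; reserved=20000
admin_reserve=8192   # kept back for root unless the caller has cap_sys_admin
threshold=$(( free + filebacked - shmem + swap + reclaimable - reserved - admin_reserve ))
echo "allocation allowed up to $threshold pages"
```

Each term maps onto one step of the list above: add what is free or reclaimable, subtract what is pinned or reserved.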
I read through the docs of man proc. When it comes to the overcommit_memory, the heuristics in overcommit_memory=0 isn't understood well. What the heuristics actually mean? does "calls of mmap(2) with MAP_NORESERVE are not checked" mean that the Kernel only allocate virtual memory without being aware of even the existence of swap space? This file contains the kernel virtual memory accounting mode. Values are: 0: heuristic overcommit (this is the default) 1: always overcommit, never check 2: always check, never overcommit In mode 0, calls of mmap(2) with MAP_NORESERVE are not checked, and the default check is very weak, leading to the risk of getting a process "OOM-killed".Apart from preceding questions, will exhaustion of the virtual address space cause the OOM regardless of the enough remaining physical memory. Thanks.
What does heuristics in Overcommit_memory =0 mean?
Your problem is that you don't have any swap space. Operating systems need swap so that they can free up RAM by storing rarely used pages on the hard drive. What you are going to need to do is repartition your hard drive. Red Hat has a suggested swap size chart here. Load up the Arch live CD, repartition, and swapon /dev/sdaX. If you need a reference see the Arch Wiki Beginner's Guide. I'll suggest a partition layout like the following one.

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 298.1G  0 disk
├─sda1   8:1    0   100M  0 part /boot
├─sda2   8:2    0    20G  0 part /
├─sda3   8:3    0     4G  0 part [SWAP]
└─sda4   8:4    0   rest  0 part /home

This is just a suggestion; you can put everything in a single partition and not worry about much (but this is the basic format that most people use). If you are keeping your root partition separate then remember to keep it around 20-25G. This is a security thing, because users should be installing programs into their own folders. You won't run out of space, I promise. Pacman and yaourt will take care of this for you.
My Computer has been freezing a lot lately, and with no apparent reason.It freezes even if my usage is 3% CPU and 9% RAM. I was using Windows 8 until I installed Ubuntu 14.04. It was really slow, and after some researching, I adopted the idea that Ubuntu 14.04 wasn't really that stable, so I decided I'd download a less resource-heavy distro, so I installed Arch Linux (which is what I'm using to type this now) with GNOME. I'm not having any of the problems I used to have in Ubuntu, except for this mostly annoying freeze that happens to be absolutely random .. My Fan is working correctly, so it's not temperature, and my drivers are up-to-date (they're the same ones I used on Windows, which I had no problem at all with). Note that: The Whole OS just freezes, and when I was once able to Alt+F2 (to get to the run-a-command dialog) and managed to type in a command (I was struggling with the keyboard to type) and hit Enter, I got the message: No enough memory .. ? Which is pretty unexpected because I'm using a minimal system (arch linux) with only one application running .. Edit: Here's my /etc/fstab file # # /etc/fstab: static file system information # # <file system> <dir> <type> <options> <dump> <pass> # /dev/sda3 UUID=2268132b-7cfa-4c55-b773-467c4f691e83 / ext4 rw,relatime,data=ordered 0 1/dev/disk/by-uuid/2236F90308C55145 /mnt/2236F90308C55145 auto nosuid,nodev,nofail,x-gvfs-show,user 0 0 /dev/disk/by-uuid/4FF142A03DACFA48 /mnt/4FF142A03DACFA48 auto nosuid,nodev,nofail,x-gvfs-show,user 0 0lsblk outputs .. NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT fd0 2:0 1 4K 0 disk sda 8:0 0 298.1G 0 disk ├─sda1 8:1 0 69.9G 0 part /mnt/2236F90308C55145 ├─sda2 8:2 0 59.2G 0 part /mnt/4FF142A03DACFA48 ├─sda3 8:3 0 90.3G 0 part / └─sda4 8:4 0 78.7G 0 part sr0 11:0 1 1024M 0 rom
Linux freezing randomly
It looks like it is:

oom_score = badness * 1000 / totalpages

based on the kernel code at https://github.com/torvalds/linux/blob/master/fs/proc/base.c#L549:

static int proc_oom_score(struct seq_file *m, struct pid_namespace *ns,
                          struct pid *pid, struct task_struct *task)
{
        unsigned long totalpages = totalram_pages + total_swap_pages;
        unsigned long points = 0;

        points = oom_badness(task, NULL, NULL, totalpages) * 1000 / totalpages;
        seq_printf(m, "%lu\n", points);
        return 0;
}
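So a task whose badness is half of totalpages shows an oom_score of about 500. The arithmetic in shell, assuming a hypothetical 8 GiB of RAM+swap for illustration:

```shell
totalpages=$(( (8 << 30) / 4096 ))   # 8 GiB of RAM+swap in 4 KiB pages
badness=$(( totalpages / 2 ))        # hypothetical badness: half of all pages
echo $(( badness * 1000 / totalpages ))   # scales badness to the 0..1000 range -> 500
```

In other words, oom_score is just badness renormalized so that 1000 means "this task could account for all of RAM+swap".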
Whilst reading both https://lwn.net/Articles/391222/ and http://man7.org/linux/man-pages/man5/proc.5.html I have come across the terms oom_score and badness. Both numbers have the same basic meaning; the higher they are, the more likely the associated task is to be OOM-killed when the host is under memory pressure. What is the relationship (if any) between the two numbers? EDIT: My guess is oom_score = max(badness + oom_score_adj, 0) but I haven't found any proof
What is the relationship between oom_score and badness?
I don't think there is a way to limit swap space, unless you modify the program to only request non-swappable memory, which even if possible would probably be impractical. However what you can and should do is limit the total amount of memory available to the process. You can use cgroups (the new-ish general way), ulimit (setrlimit, the traditional way), or the timeout tool.
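A minimal sketch of the setrlimit route: run the job in a subshell with a hard cap on virtual memory, so a runaway allocation fails on its own instead of dragging the machine into swap. The 300 MiB cap is arbitrary, and the python3 one-liner merely stands in for the hungry job:

```shell
( ulimit -v $(( 300 * 1024 ))                        # cap virtual memory at 300 MiB (kB units)
  python3 -c 'x = bytearray(1024**3)' 2>/dev/null    # stand-in job trying to grab 1 GiB
) && echo survived || echo killed
```

This prints "killed": the allocation fails inside the subshell and the limit vanishes with it, leaving the parent shell untouched. On systemd machines, something like systemd-run --scope -p MemoryMax=1G would be the cgroup equivalent.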
While developing some software, a program under test sometimes eats all the memory, then proceeds to yomp into the swap space and start thrashing the disk, leading to a predictable drop in responsiveness to the point that I generally switch to another terminal to log in and kill the process manually. What I'd like is for this particular process to get killed before it starts eating swap space like there's no tomorrow. I found a github page in which killing processes with a watchdog is discussed (and indeed, done) - https://github.com/rfjakob/earlyoom - and I could alter that code a little to seek out and kill only this specific faulty program, but it would be nice if I could simply deny use of swap space to a nominated process and have it simply get killed. I suppose even more awkwardly, it's be fine for it to get a small amount of swap space in the normal course of things; it's only when it's on a quest to consume all the memory in the universe that it needs killing.
Can I deny use of swap space to a specific process (and have it just get killed)?
The RES column in your output from top shows the amount of physical memory used by each process. (The memory used by a process that is RESident in physical memory. This is distinct from VIRTual memory allocated by each process.) Just in the subset shown there is 4GB used. There is 1.8GB used as cache. This can be discarded automatically by the system as soon as there is a real need for physical memory (contrary to some posters elsewhere you do not need to drop these caches manually). That brings the total used to 5.8GB. You provide information about 13 processes of 272. I would say it's quite possible that there are enough processes unlisted to consume the remaining 0.8GB that you mention in the question title.
I'm trying to figure out why my Linux machine is so slow and I found this: $ free --human total used free shared buff/cache available Mem: 7,3Gi 6,6Gi 168Mi 1,0Gi 1,8Gi 746Mi Swap: 9,3Gi 2,7Gi 6,6GiWhen I run top -n1 -b -o+RES | head -n20 I can't see any process that uses so much memory. Even the cache is not filled this much. top - 07:37:45 up 23 min, 2 users, load average: 1,31, 1,41, 1,04 Tasks: 272 total, 1 running, 271 sleeping, 0 stopped, 0 zombie %Cpu(s): 8,7 us, 13,0 sy, 0,0 ni, 78,3 id, 0,0 wa, 0,0 hi, 0,0 si, 0,0 st MiB Mem : 7457,6 total, 150,1 free, 5718,4 used, 2481,4 buff/cache MiB Swap: 9536,0 total, 9524,2 free, 11,8 used. 1739,2 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 5038 isumis 20 0 2122864 1,3g 23568 S 0,0 17,8 0:42.42 rust-an+ 3675 isumis 20 0 11,5g 401788 167268 S 0,0 5,3 4:52.22 firefox+ 3883 isumis 20 0 38,7g 327144 78416 S 0,0 4,3 1:32.40 WebExte+ 2712 isumis 20 0 3909688 275568 112448 S 0,0 3,6 0:46.07 plasmas+ 7724 isumis 20 0 2718576 240288 79788 S 0,0 3,1 1:39.39 Isolate+ 8244 isumis 20 0 2654028 214240 95748 S 0,0 2,8 0:16.32 Isolate+ 7926 isumis 20 0 1123,1g 211408 119956 S 0,0 2,8 0:04.49 1passwo+ 8283 isumis 20 0 2621440 183860 101344 S 0,0 2,4 0:16.16 Isolate+ 8142 isumis 20 0 2595336 176064 96568 S 5,6 2,3 0:13.75 Isolate+ 1179 root 20 0 1691464 171008 25304 S 0,0 2,2 0:19.97 dockerd 7992 isumis 20 0 32,3g 160576 41824 S 0,0 2,1 0:01.14 1passwo+ 4908 isumis 20 0 1130,9g 156720 58864 S 0,0 2,1 0:11.68 code 4808 isumis 20 0 1122,0g 144336 53780 S 0,0 1,9 0:05.30 codeEven after closing VS Code (rust analyzer) there is still 3GB in use. Is there something I can do to figure out the problem? I'm using Debian 12 on a Lenovo T470p.
Linux uses 6.6Gi RAM for nothing
I think what you are looking for is --memfree in GNU Parallel:

find ... | parallel --memfree 1G dostuff

This will only start dostuff if there is 1G RAM free, and it keeps starting more jobs until either less than 1G RAM is free or there is 1 job running per CPU thread. If free RAM drops to 0.5G (50% of 1G) the youngest job will be killed. So in metacode:

limit = 1G
while true:
    if freemem > limit:
        if count(running_jobs) < cpu.threads():
            another_job.start()
    if freemem < 0.5 * limit:
        youngest_job.kill()

Combined with --retries 10 you can tell GNU Parallel to retry the killed job 10 times. If dostuff takes a while to gobble up the memory, use --delay 30s to wait 30s before spawning the next job.
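GNU Parallel implements that loop internally; as a hypothetical illustration of the same gate in plain shell, you can read MemAvailable from /proc/meminfo before deciding to launch (dostuff stands in for your real command):

```shell
# Minimal sketch of the --memfree gating idea: only launch a job when
# MemAvailable exceeds the limit. MemAvailable already accounts for
# reclaimable cache, so it is a better signal than MemFree.
limit_kib=$((1 * 1024 * 1024))   # 1G expressed in KiB
mem_available_kib=$(awk '/^MemAvailable:/ { print $2 }' /proc/meminfo)
if [ "$mem_available_kib" -gt "$limit_kib" ]; then
    echo "enough memory available: would start dostuff"
else
    echo "below the limit: holding back"
fi
```

This only shows the check itself; GNU Parallel additionally tracks running jobs and kills the youngest one when memory keeps falling.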
My software runs a command that looks something like:

find | xargs do a potentially memory hungry job

The problem is that sometimes a potentially memory hungry job gets too hungry, the system gets unresponsive and I have to reboot it. My understanding is that it happens due to the memory allocation over commitment. What I would like to happen is that if a job spawned by xargs wants more memory than is available, it dies (I am OK with it) and that is it. I guess I can get this behavior if I turn off overcommitment system-wide, but it is not an option. Is it possible to turn it off for a process? A possible solution I was thinking of was to set

ulimit -v RAM size

But something tells me it is not a good idea.
Prevent the machine being slowed down by running out of memory
Please try this. Here is the list of the builtins included in ksh:

$ ksh -c 'builtin'

These are the only builtins useful for answering your question: echo kill print printf read

So it seems that the only way to "read a file" is to use read. Let's define a couple of functions (copy and paste in the CLI):

function Usage {
    echo "fileread: filename [from line] [to line]"
    exit 1
}

function fileread {
    [ "$#" -lt 1 ] && echo "please supply the name of a file" && Usage
    linestart=${2:-1}
    lineend=${3:-0}
    i=0
    while IFS=$'\n' read line; do
        i=$((i+1))
        [[ "$i" -lt "$linestart" ]] && continue
        [[ "$lineend" != 0 && "$i" -gt "$lineend" ]] && continue
        echo "$i $line"
    done <"$1"
}

And then call the function (as an example):

$ cd /var/run
$ fileread sshd.pid 10 20
Context: an AIX lpar with very low memory (no forking possible, so only the shell's builtins (cd, echo, kill) will work). I can have an (HMC) console to it, but I need a better way to start freeing memory in AIX when memory is too low to even allow you to do a "ps -ef". (I have a way, but it randomly kills existing pids. I need more info on the PIDs I can kill, so I can choose an unimportant one.)

I want to know:

How could I see the content of files using only ksh's builtins — and the ultimate goal: what file's content could I look at, using only builtins, to choose the pids to kill, so that I only kill "mundane" processes? (When I have killed enough PIDs, I will then be able to "ps -ef", "netstat -rn" etc., and "ps" should still show the "important" processes.)

What I already know:

I can log in on the console (ssh user@hmc, vtmenu, choose the lpar with OutOfMemory problems, log in as root, and, after a while (2-5 minutes) and several complaints that ksh can't fork commands in /etc/profile, I get to a (ksh) prompt).

Now I can simulate "ls" to see what /proc/PID# directories exist: cd /proc ; echo * will get me the list of still-running PIDs. (Usually I'll see 0, 1 (init), which are not to be killed, and also a whole bunch of other PIDs, with little indication of what process they run (ksh? syncd? ls? java?).)

I can also kill some pids here to free enough memory (kill is a builtin in ksh (or bash!), so no need to fork to use it), and when I have killed enough PIDs, I am then able to do ps -ef, netstat -rn etc., allowing me to get the state of the server before I shutdown -rF to reboot it from the lpar itself (This will sync, close filesystems, etc.
Note that the alternative, a reboot from the HMC, is usually not possible (as it probably tries to fork some commands), unless you add "--immed", which is like powering off directly and is not advisable as it can cause filesystem problems, sometimes causing a very lengthy fsck when restarting the lpar).

Killing some PIDs and running the shutdown allows me to get some "ps -ef" ideas of what was running and needs to restart, get the routes (in case the static routes don't match), and shut down "more gracefully", preserving the filesystem and avoiding a lengthy fsck when it starts up.)

But I need your help to also:

See the content of some files! (For example: to be able to see the pid in some of the pid files in /var/run/*.pid, I'd do cd /var/run and then echo *pid to get the list of pid files; but then, with only the builtins of ksh (remember: no forking!), how can I get the content of one of those files?) The same trick could also help to get some info underneath /proc/PID#/ ..., maybe allowing me to also choose the right PID to kill.

Choose PIDs "wisely" using the above (or whatever trick you have).

Precision: Bonus points if your trick works with this version of ksh builtins:

prompt# strings /usr/bin/ksh | grep '\..*\.' | grep builtin
0@(#)27 1.57.14.5 src/bos/usr/bin/ksh/builtin.c, cmdksh, bos61Z, z2013_29A2 7/5/13 00:10:52
AIX - use ksh builtins to free memory when fork not possible
Really, the best solution for the OOM killer is not to have one. Configure your system not to use overcommitted memory, and refuse to use applications and libraries that depend on it. In this day of infinite disk, why not supply infinite swap? No need to commit to swap unless the memory is used, right? The answer to your question may be that the OOM killer doesn't work the way you think it does. The OOM killer uses heuristics to choose which process to kill, and the rules don't always mean that the last requestor dies. Cf. Taming the OOM killer. So it's not a question of the OOM killer being "ineffective", but rather one of it making a choice other than the one you'd prefer.
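The heuristic the OOM killer uses is exposed per process in /proc, so you can see in advance which process it would pick. A quick sketch, shown for the current shell:

```shell
# Inspect the OOM killer's heuristic for a process: a higher oom_score
# means a likelier victim; oom_score_adj (-1000..1000) biases the score.
pid=$$
score=$(cat /proc/$pid/oom_score)
adj=$(cat /proc/$pid/oom_score_adj)
echo "pid=$pid oom_score=$score oom_score_adj=$adj"
```

Running this for the PIDs of your big processes shows which one the kernel would kill first — often not the last requestor.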
If I type in my shell x=`yes`, eventually I will get cannot allocate 18446744071562067968 bytes (4295032832 bytes allocated) because yes tries to write into x forever until it runs out of memory. I get a message cannot allocate <memory> because the kernel's OOM-killer told xrealloc there are no more bytes to allocate, and that it should exit immediately. But, when I ask any game_engine to allocate more graphics memory that does not exist because I have insufficient resources, it turns to my RAM and CPU to allocate the requested memory there instead. Why doesn't the kernel's OOM-killer ever catch any game_engine trying to allocate tons of memory, like it does with x=`yes`? That is, if I'm running game_engine and my user hasn't spawned any new processes since memory-hog game_engine, why does said game_engine always succeed in bringing my system to its unresponsive, unrecoverable knees without OOM-killer killing it?I use game engines as an example because they tend to allocate tons and tons of memory on my poor little integrated card, but this seems to happen with many resource-intensive X processes. Are there cases under which the OOM-killer is ineffective or not able to revoke a process' memory?
Why does OOM-killer sometimes fail to kill resource hogs?
It's caused by the process exiting between top getting the process list and top trying to get info on that particular process. It's more common on a very busy box but generally safe to ignore. You might consider it a bug, you might not.
I have 100+ boxes running FreeBSD 8.4 amd64 RELEASE (p9) with the same configuration, and only one of them sometimes behaves strangely: the load average (generally 4–6, which is fine of course since the box has 8 CPU cores) grows to 30–40, the system runs slowly, and top starts to print kvm_open: cannot open /proc/[some_numbers]/mem messages. When the load average goes down, the messages no longer appear. The question is not how to fight the high load average, but: what does kvm_open: cannot open /proc mean? The system is not running out of memory, as far as I can see.
kvm_open: cannot open /proc
In the end, it was a problem with the nct6775 driver. It was loaded by the /etc/modules file. After removing it the error disappeared.
I have an Ubuntu 20.04.4 server with 32GB RAM. The server is running a bunch of LXD containers and two VMs (libvirt+qemu+kvm). After startup, with all services running, the RAM utilization is about ~12GB. After 3-4 weeks the RAM utilization reaches ~90%. If I stop all containers and VMs the utilization is still ~20GB. However, I cannot figure out what is claiming this memory. I have already tried clearing the cache, but that doesn't change much. I compiled the kernel with support for kmemleak, but it did not detect anything useful (although its tracking objects do show up in slabtop).

systemd-cgtop:

/                                                    593   -  23.7G   -   -
machine.slice                                          -   -   1.4G   -   -
system.slice                                         116   -  301.1M  -   -
user.slice                                            11   -  141.9M  -   -
user.slice/user-1000.slice                            11   -  121.6M  -   -
system.slice/systemd-journald.service                  1   -   83.8M  -   -
user.slice/user-1000.slice/session-297429.scope        5   -   81.0M  -   -
system.slice/libvirtd.service                         22   -   46.2M  -   -
user.slice/user-1000.slice/[emailprotected]            6   -   39.8M  -   -
system.slice/snapd.service                            36   -   19.8M  -   -
system.slice/cron.service                              1   -   19.3M  -   -
init.scope                                             1   -   14.0M  -   -
system.slice/systemd-udevd.service                     1   -   13.2M  -   -
system.slice/multipathd.service                        7   -   10.8M  -   -
system.slice/NetworkManager.service                    3   -    5.8M  -   -
system.slice/networkd-dispatcher.service               1   -    5.4M  -   -
system.slice/ssh.service                               1   -    5.0M  -   -
system.slice/ModemManager.service                      3   -    4.5M  -   -
system.slice/systemd-networkd.service                  1   -    3.5M  -   -
system.slice/accounts-daemon.service                   3   -    3.5M  -   -
system.slice/udisks2.service                           5   -    3.4M  -   -
system.slice/polkit.service                            3   -    3.0M  -   -
system.slice/rsyslog.service                           4   -    2.8M  -   -
system.slice/systemd-resolved.service                  1   -    2.4M  -   -
system.slice/unattended-upgrades.service               2   -    1.8M  -   -
system.slice/dbus.service                              1   -    1.8M  -   -
system.slice/systemd-logind.service                    1   -    1.7M  -   -
system.slice/smartmontools.service                     1   -    1.5M  -   -
system.slice/systemd-machined.service                  1   -    1.5M  -   -
system.slice/systemd-timesyncd.service                 2   -    1.4M  -   -
system.slice/virtlogd.service                          1   -    1.3M  -   -
system.slice/rtkit-daemon.service                      3   -    1.2M  -
-/proc/meminfo: MemTotal: 32718604 kB MemFree: 11480728 kB MemAvailable: 11612788 kB Buffers: 28 kB Cached: 144512 kB SwapCached: 855404 kB Active: 520504 kB Inactive: 541588 kB Active(anon): 441708 kB Inactive(anon): 484240 kB Active(file): 78796 kB Inactive(file): 57348 kB Unevictable: 18664 kB Mlocked: 18664 kB SwapTotal: 33043136 kB SwapFree: 32031680 kB Dirty: 0 kB Writeback: 0 kB AnonPages: 94680 kB Mapped: 126592 kB Shmem: 660 kB KReclaimable: 432484 kB Slab: 10784740 kB SReclaimable: 432484 kB SUnreclaim: 10352256 kB KernelStack: 10512 kB PageTables: 5052 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 49402436 kB Committed_AS: 1816364 kB VmallocTotal: 34359738367 kB VmallocUsed: 152512 kB VmallocChunk: 0 kB Percpu: 8868864 kB HardwareCorrupted: 0 kB AnonHugePages: 0 kB ShmemHugePages: 0 kB ShmemPmdMapped: 0 kB FileHugePages: 0 kB FilePmdMapped: 0 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB Hugetlb: 0 kB DirectMap4k: 19383100 kB DirectMap2M: 14053376 kB DirectMap1G: 0 kBslabtop: Active / Total Objects (% used) : 30513607 / 33423869 (91.3%) Active / Total Slabs (% used) : 1384092 / 1384092 (100.0%) Active / Total Caches (% used) : 123 / 203 (60.6%) Active / Total Size (% used) : 9965969.20K / 10757454.91K (92.6%) Minimum / Average / Maximum Object : 0.01K / 0.32K / 16.00K OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME 27156909 26001970 95% 0.30K 1194104 26 9552832K kmemleak_object 754624 742232 98% 0.06K 11791 64 47164K kmalloc-64 654675 278378 42% 0.57K 23382 28 374112K radix_tree_node 593436 348958 58% 0.08K 11636 51 46544K Acpi-State 559744 418325 74% 0.03K 4373 128 17492K kmalloc-32 496320 483104 97% 0.12K 15510 32 62040K kernfs_node_cache 487104 155952 32% 0.06K 7611 64 30444K vmap_area 394240 165965 42% 0.14K 14080 28 56320K btrfs_extent_map 355580 342674 96% 0.09K 7730 46 30920K trace_event_file 339573 338310 99% 4.00K 42465 8 1358880K kmalloc-4k 306348 154794 50% 0.19K 
7410 42 59280K dentry 145931 104400 71% 1.13K 11552 28 369664K btrfs_inode 137728 137174 99% 0.02K 538 256 2152K kmalloc-16 112672 74034 65% 0.50K 3671 32 58736K kmalloc-512 102479 62366 60% 0.30K 4093 26 32744K btrfs_delayed_node 68880 66890 97% 2.00K 4305 16 137760K kmalloc-2k 66656 48345 72% 0.25K 2083 32 16664K kmalloc-256 64110 47818 74% 0.59K 2376 27 38016K inode_cache 50176 50176 100% 0.01K 98 512 392K kmalloc-8 44710 43744 97% 0.02K 263 170 1052K lsm_file_cache 43056 11444 26% 0.25K 1418 32 11344K pool_workqueue 36480 29052 79% 0.06K 570 64 2280K kmalloc-rcl-64 33920 25846 76% 0.06K 530 64 2120K anon_vma_chain 24822 14264 57% 0.19K 832 42 6656K kmalloc-192 23552 23552 100% 0.03K 184 128 736K fsnotify_mark_connector 23517 17994 76% 0.20K 603 39 4824K vm_area_struct 19572 14909 76% 0.09K 466 42 1864K kmalloc-rcl-96 18262 15960 87% 0.09K 397 46 1588K anon_vma 14548 12905 88% 1.00K 459 32 14688K kmalloc-1k 14162 14162 100% 0.05K 194 73 776K file_lock_ctx 13104 12141 92% 0.09K 312 42 1248K kmalloc-96 13062 13062 100% 0.19K 311 42 2488K cred_jar 13056 10983 84% 0.12K 408 32 1632K kmalloc-128 12192 8922 73% 0.66K 508 24 8128K proc_inode_cache 11730 11444 97% 0.69K 1444 46 46208K squashfs_inode_cache 11067 11067 100% 0.08K 217 51 868K task_delay_info 10752 10752 100% 0.03K 84 128 336K kmemleak_scan_area 10656 8666 81% 0.25K 333 32 2664K filp 10252 10252 100% 0.18K 235 44 1880K kvm_mmu_page_header 10200 10200 100% 0.05K 120 85 480K ftrace_event_field 10176 10176 100% 0.12K 318 32 1272K pid 9906 9906 100% 0.10K 254 39 1016K Acpi-ParseExt 9600 9213 95% 0.12K 300 32 1200K kmalloc-rcl-128 9520 9520 100% 0.07K 170 56 680K Acpi-Operand 8502 8063 94% 0.81K 218 39 6976K sock_inode_cache 7733 7733 100% 0.70K 169 46 5408K shmem_inode_cache 7392 7231 97% 0.19K 176 42 1408K skbuff_ext_cache 6552 6552 100% 0.19K 163 42 1304K kmalloc-rcl-192 6480 6480 100% 0.11K 180 36 720K khugepaged_mm_slot 6144 6144 100% 0.02K 24 256 96K ep_head 5439 5439 100% 0.42K 147 37 2352K 
btrfs_ordered_extent 5248 4981 94% 0.25K 164 32 1312K skbuff_head_cache 4792 4117 85% 4.00K 606 8 19392K biovec-max 4326 4326 100% 0.19K 103 42 824K proc_dir_entry 4125 4125 100% 0.24K 125 33 1000K tw_sock_TCPv6 3978 3978 100% 0.10K 102 39 408K buffer_head 3975 3769 94% 0.31K 159 25 1272K mnt_cache 3328 3200 96% 1.00K 104 32 3328K RAW 3136 3136 100% 1.12K 112 28 3584K signal_cache 3072 2560 83% 0.03K 24 128 96K dnotify_struct 2910 2820 96% 1.06K 97 30 3104K UNIX 2522 2396 95% 1.19K 97 26 3104K RAWv6 2448 2448 100% 0.04K 24 102 96K pde_opener 2400 2400 100% 0.50K 75 32 1200K skbuff_fclone_cache 2112 2080 98% 1.00K 66 32 2112K biovec-64 1695 1587 93% 2.06K 113 15 3616K sighand_cache 1518 1518 100% 0.69K 33 46 1056K files_cache 1500 1500 100% 0.31K 60 25 480K nf_conntrack 1260 894 70% 6.06K 252 5 8064K task_struct 1260 1260 100% 1.06K 42 30 1344K mm_struct 1222 1158 94% 2.38K 94 13 3008K TCPv6 1150 1150 100% 0.34K 25 46 400K taskstats 924 924 100% 0.56K 33 28 528K task_group 888 888 100% 0.21K 24 37 192K file_lock_cache 864 864 100% 0.11K 24 36 96K btrfs_trans_handle 855 855 100% 2.19K 62 14 1984K TCP 851 851 100% 0.42K 23 37 368K uts_namespace 816 816 100% 0.12K 24 34 96K seq_file 816 816 100% 0.04K 8 102 32K ext4_extent_status 792 792 100% 0.24K 24 33 192K tw_sock_TCP 782 782 100% 0.94K 23 34 736K mqueue_inode_cache 720 720 100% 0.13K 24 30 96K pid_namespace 704 704 100% 0.06K 11 64 44K kmem_cache_node 648 648 100% 1.16K 24 27 768K perf_event 640 640 100% 0.12K 20 32 80K scsi_sense_cache 624 624 100% 0.30K 24 26 192K request_sock_TCP 624 624 100% 0.15K 24 26 96K fuse_request 596 566 94% 8.00K 149 4 4768K kmalloc-8k 576 576 100% 1.31K 24 24 768K UDPv6 494 494 100% 0.30K 19 26 152K request_sock_TCPv6 480 480 100% 0.53K 16 30 256K user_namespace 432 432 100% 1.15K 16 27 512K ext4_inode_cache 416 416 100% 0.25K 13 32 104K kmem_cache 416 416 100% 0.61K 16 26 256K hugetlbfs_inode_cache 390 390 100% 0.81K 10 39 320K fuse_inode 306 306 100% 0.04K 3 102 12K bio_crypt_ctx 292 
292 100% 0.05K 4 73 16K mbcache 260 260 100% 1.56K 13 20 416K bdev_cache 256 256 100% 0.02K 1 256 4K jbd2_revoke_table_s 232 232 100% 4.00K 29 8 928K names_cache 192 192 100% 1.98K 12 16 384K request_queue 170 170 100% 0.02K 1 170 4K mod_hash_entries 168 168 100% 4.12K 24 7 768K net_namespace 155 155 100% 0.26K 5 31 40K numa_policy 132 132 100% 0.72K 3 44 96K fat_inode_cache 128 128 100% 0.25K 4 32 32K dquot 128 128 100% 0.06K 2 64 8K ext4_io_end 108 108 100% 2.61K 9 12 288K x86_emulator 84 84 100% 0.19K 2 42 16K ext4_groupinfo_4k 68 68 100% 0.12K 2 34 8K jbd2_journal_head 68 68 100% 0.12K 2 34 8K abd_t 64 64 100% 8.00K 16 4 512K irq_remap_cache 64 64 100% 2.00K 4 16 128K biovec-128 63 63 100% 4.06K 9 7 288K x86_fpu 56 56 100% 0.07K 1 56 4K fsnotify_mark 56 56 100% 0.14K 2 28 8K ext4_allocation_context 42 42 100% 0.75K 1 42 32K dax_cache 40 40 100% 0.20K 1 40 8K ip4-frags 36 36 100% 7.86K 9 4 288K kvm_vcpu 30 30 100% 1.06K 1 30 32K dmaengine-unmap-128 24 24 100% 0.66K 1 24 16K ovl_inode 15 15 100% 2.06K 1 15 32K dmaengine-unmap-256 6 6 100% 16.00K 3 2 96K zio_buf_comb_16384 0 0 0% 0.01K 0 512 0K kmalloc-rcl-8 0 0 0% 0.02K 0 256 0K kmalloc-rcl-16 0 0 0% 0.03K 0 128 0K kmalloc-rcl-32 0 0 0% 0.25K 0 32 0K kmalloc-rcl-256 0 0 0% 0.50K 0 32 0K kmalloc-rcl-512 0 0 0% 1.00K 0 32 0K kmalloc-rcl-1k 0 0 0% 2.00K 0 16 0K kmalloc-rcl-2k 0 0 0% 4.00K 0 8 0K kmalloc-rcl-4k 0 0 0% 8.00K 0 4 0K kmalloc-rcl-8k 0 0 0% 0.09K 0 42 0K dma-kmalloc-96 0 0 0% 0.19K 0 42 0K dma-kmalloc-192 0 0 0% 0.01K 0 512 0K dma-kmalloc-8 0 0 0% 0.02K 0 256 0K dma-kmalloc-16 0 0 0% 0.03K 0 128 0K dma-kmalloc-32 0 0 0% 0.06K 0 64 0K dma-kmalloc-64 0 0 0% 0.12K 0 32 0K dma-kmalloc-128 0 0 0% 0.25K 0 32 0K dma-kmalloc-256 0 0 0% 0.50K 0 32 0K dma-kmalloc-512 0 0 0% 1.00K 0 32 0K dma-kmalloc-1k 0 0 0% 2.00K 0 16 0K dma-kmalloc-2k 0 0 0% 4.00K 0 8 0K dma-kmalloc-4k 0 0 0% 8.00K 0 4 0K dma-kmalloc-8k 0 0 0% 0.12K 0 34 0K iint_cache 0 0 0% 1.00K 0 32 0K PING 0 0 0% 0.75K 0 42 0K xfrm_state 0 0 0% 0.37K 0 43 0K 
request_sock_subflow 0 0 0% 1.81K 0 17 0K MPTCP 0 0 0% 0.62K 0 25 0K dio 0 0 0% 0.19K 0 42 0K userfaultfd_ctx_cache 0 0 0% 0.03K 0 128 0K ext4_pending_reservation 0 0 0% 0.08K 0 51 0K ext4_fc_dentry_update 0 0 0% 0.04K 0 102 0K fat_cache 0 0 0% 0.81K 0 39 0K ecryptfs_auth_tok_list_item 0 0 0% 0.02K 0 256 0K ecryptfs_file_cache 0 0 0% 0.94K 0 34 0K ecryptfs_inode_cache 0 0 0% 2.82K 0 11 0K dm_uevent 0 0 0% 3.23K 0 9 0K kcopyd_job 0 0 0% 1.19K 0 26 0K PINGv6 0 0 0% 0.18K 0 44 0K ip6-frags 0 0 0% 2.00K 0 16 0K MPTCPv6 0 0 0% 0.13K 0 30 0K fscrypt_info 0 0 0% 0.25K 0 32 0K fsverity_info 0 0 0% 1.25K 0 25 0K AF_VSOCK 0 0 0% 0.19K 0 42 0K kcf_sreq_cache 0 0 0% 0.50K 0 32 0K kcf_areq_cache 0 0 0% 0.19K 0 42 0K kcf_context_cache 0 0 0% 4.00K 0 8 0K zfs_btree_leaf_cache 0 0 0% 0.44K 0 36 0K ddt_entry_cache 0 0 0% 1.22K 0 26 0K zio_cache 0 0 0% 0.05K 0 85 0K zio_link_cache 0 0 0% 0.50K 0 32 0K zio_buf_comb_512 0 0 0% 1.00K 0 32 0K zio_buf_comb_1024 0 0 0% 1.50K 0 21 0K zio_buf_comb_1536 0 0 0% 2.00K 0 16 0K zio_buf_comb_2048 0 0 0% 2.50K 0 12 0K zio_buf_comb_2560 0 0 0% 3.00K 0 10 0K zio_buf_comb_3072 0 0 0% 3.50K 0 9 0K zio_buf_comb_3584 0 0 0% 4.00K 0 8 0K zio_buf_comb_4096 0 0 0% 8.00K 0 4 0K zio_buf_comb_5120 0 0 0% 8.00K 0 4 0K zio_buf_comb_6144 0 0 0% 8.00K 0 4 0K zio_buf_comb_7168 0 0 0% 8.00K 0 4 0K zio_buf_comb_8192 0 0 0% 12.00K 0 2 0K zio_buf_comb_10240 0 0 0% 12.00K 0 2 0K zio_buf_comb_12288 0 0 0% 16.00K 0 2 0K zio_buf_comb_14336 0 0 0% 16.00K 0 2 0K lz4_cache 0 0 0% 0.24K 0 33 0K sa_cache 0 0 0% 0.96K 0 33 0K dnode_t 0 0 0% 0.32K 0 24 0K arc_buf_hdr_t_full 0 0 0% 0.38K 0 41 0K arc_buf_hdr_t_full_crypt 0 0 0% 0.09K 0 42 0K arc_buf_hdr_t_l2only 0 0 0% 0.08K 0 51 0K arc_buf_t 0 0 0% 0.38K 0 42 0K dmu_buf_impl_t 0 0 0% 0.37K 0 43 0K zil_lwb_cache 0 0 0% 0.15K 0 26 0K zil_zcw_cache 0 0 0% 0.13K 0 30 0K sio_cache_0 0 0 0% 0.15K 0 26 0K sio_cache_1 0 0 0% 0.16K 0 24 0K sio_cache_2 0 0 0% 1.06K 0 30 0K zfs_znode_cache 0 0 0% 0.09K 0 46 0K zfs_znode_hold_cache
Linux server high memory usage without applications
One solution would be to create a separate partition on the disk for this special application's data storage. You can set the partition size to whatever you want and then mount the partition under your home directory. Then, as long as no one else has access to the partition (i.e. write permissions), you should effectively have that space set aside on disk for your particular usage.
I want to install a data manipulation solution. The solution is deployed in a folder in my home directory. Free space on the disk is uncontrollable and can shrink at any moment (other users' data). How can I reserve, say, 100 gigabytes for only one folder? Is it possible? If yes, then how?
Is there a way in Linux to preserve folder size for safety
So, it turns out a colleague was experimenting with large-page-support and didn't revert all changes he made. When I ran sysctl -w vm.nr_hugepages=0and commented out this section in the /etc/sysctl.conf # Hugepage Support MySQL #vm.hugetlb_shm_group = 27 #kernel.shmmax = 10737418240 #kernel.shmall = 23689185 #vm.nr_hugepages = 46268it freed up 90 GB which were wasted. This could be seen in the output of cat /proc/meminfo: HugePages_Total: 46268 HugePages_Free: 46268 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kBHuge thanks go out to Matthew Ife. Please upvote his answer over at serverfault.com instead of this one.
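The arithmetic behind the wasted 90 GB can be reproduced straight from /proc/meminfo: 46268 reserved huge pages × 2048 KiB each ≈ 90.4 GiB. A small sketch:

```shell
# Multiply HugePages_Total by Hugepagesize (KiB) to see how much RAM
# is reserved for huge pages and thus unavailable to normal processes.
total=$(awk '/^HugePages_Total:/ { print $2 }' /proc/meminfo)
size_kib=$(awk '/^Hugepagesize:/ { print $2 }' /proc/meminfo)
reserved_kib=$(( total * size_kib ))
echo "Huge pages reserve ${reserved_kib} KiB of RAM"
```

If HugePages_Free stays equal to HugePages_Total, as in the output above, the reservation is pure waste.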
I have a dedicated MySQL server equipped with 128 GB RAM. MySQL recently gets killed by the oom-killer, although MySQL is configured to use 95 GB in the worst case. In my research I came across this:

# cat /proc/11895/status
Name:   mysqld
State:  S (sleeping)
Tgid:   11895
Pid:    11895
PPid:   24530
TracerPid:      0
Uid:    27      27      27      27
Gid:    27      27      27      27
Utrace: 0
FDSize: 1024
Groups: 27
VmPeak: 72188044 kB
VmSize: 72122508 kB
VmLck:         0 kB
VmHWM:  33294036 kB
VmRSS:  32829668 kB
VmData: 72076496 kB
VmStk:        88 kB
VmExe:     11800 kB
VmLib:      3608 kB
VmPTE:     73388 kB
VmSwap:  4139376 kB
Threads:        59

I'm wondering why the VmHWM and VmRSS are at only around 33 GB, whereas on another server (also a slave to the same master, configured almost the same except for the buffer pool, and with 256 GB RAM) the output is as follows:

# cat /proc/51298/status
Name:   mysqld
State:  S (sleeping)
Tgid:   51298
Pid:    51298
PPid:   50443
TracerPid:      0
Uid:    27      27      27      27
Gid:    27      27      27      27
Utrace: 0
FDSize: 2048
Groups: 27
VmPeak: 243701128 kB
VmSize: 239628932 kB
VmLck:         0 kB
VmHWM:  209331200 kB
VmRSS:  205515868 kB
VmData: 239582156 kB
VmStk:        88 kB
VmExe:     11800 kB
VmLib:      3608 kB
VmPTE:    409600 kB
VmSwap:        0 kB
Threads:        281

Here the memory is used to about 80%, whereas on the oom-killed server it's only about 25% (note that these values were observed shortly before the oom-killer strikes again). What could be the reason? There is no competing process. And what can I do about it?
VmHWM only 25% whereas it should be around 80%
free memory is completely unused, while available memory can be freed by the kernel immediately if it is needed. It contains things such as file system cache, avoiding reads from the disk and speeding up the system. If you look closely, you can see that the available amount is similar to buff/cache. Thus, the kernel should not invoke the OOM killer unless the available memory is exhausted. As Andrew pointed out in the comments: The safe option is to disable overcommitting of memory in the kernel. That way, when a program requests more memory than is currently available, the malloc call will return NULL instead of succeeding. This means that there can never be more memory allocated than physically available, so the OOM killer will (hopefully) never be invoked: # Assuming swap is disabled because it is an embedded system: echo 100 > /proc/sys/vm/overcommit_ratio # Commit max 100% of physical RAM (+ Swap, which is off) echo 2 > /proc/sys/vm/overcommit_memory # Disable overcommit heuristicsHowever, this requires that (a) your programs do not request much more memory than they intend to use, and (b) that you check the return value of all malloc calls and do something sensible when they return NULL. Otherwise you risk the same behavior as the OOM killer (your process randomly dying), in this case due to a segfault/null dereference. More tuning advice is difficult without information on the specific scenario you are facing. But if physical memory is exhausted while overcommitted, there is not much that the kernel can do apart from: swapping, killing a process, or panicking. You could try to enable zram or zswap (thus "swapping" to RAM) or add a swapfile, but both will likely degrade system performance when memory is (nearly) full. Better make sure that your application has no memory leaks.
I am working on an embedded Linux system on an SoC platform. I have 2 machines that ran the same memory workload, and I got the following memory output.

Machine 1:

              total        used        free      shared  buff/cache   available
Mem:          50616       35304        2516          48       12796       13100
Swap:             0           0           0

Machine 2:

              total        used        free      shared  buff/cache   available
Mem:          57328       45320        2856          56        9152        9572
Swap:             0           0           0

Machine 1 has less free memory than machine 2, but more available memory. In this case, which machine is at higher risk of triggering the OOM killer? Is there any memory tuning advice?

Updated with setting overcommit_memory (sort of off-topic)

Per Fritz's answer, on another system I changed overcommit_memory to 2 with no other changes, and got the following.

# cat /proc/sys/vm/overcommit_ratio
50
#
# free
              total        used        free      shared  buff/cache   available
Mem:          84244       25256       35196          92       23792       56772
Swap:             0           0           0
# echo 2 > /proc/sys/vm/overcommit_memory
#
# ls
-/bin/sh: can't fork: Cannot allocate memory

The ratio is 50, but it said it cannot allocate memory once overcommit was disabled. Even with echo 100 > /proc/sys/vm/overcommit_ratio, after echo 2 > /proc/sys/vm/overcommit_memory it still hit the error, and I had to reboot the system. So per my testing, changing memory overcommit might not produce the pre-defined Segmentation Fault. I have accepted Fritz's answer on how the kernel reclaims memory with respect to available and free. We may open another question to discuss Linux memory overcommit.
Which is the trigger of OOM killer, free or availaible memory in Linux?
There's no system call, or library function, as far as I'm aware. No need for getpid() though: you can open /proc/self/oom_score_adj directly.
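If changing the program source turns out to be inconvenient, the same file can also be set from a wrapper script before exec'ing the program, since children inherit the value. A sketch — note that lowering the score (e.g. to -1000) requires root or CAP_SYS_RESOURCE, while raising it is always allowed; the exec line is a hypothetical placeholder:

```shell
# Wrapper sketch: set this process's oom_score_adj, then exec the real
# program, which inherits the value. 500 makes the process a *more*
# likely OOM victim; negative values (protection) need privileges.
echo 500 > /proc/self/oom_score_adj
cat /proc/self/oom_score_adj
# exec /path/to/important-program "$@"
```

Inside the program itself, the open/write/close on /proc/self/oom_score_adj does exactly the same thing.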
I have an important process that the OOM Killer has taken a fancy to with unfortunate results. I would like to make this less likely. All google turns up is stuff like: echo -1000 > /proc/${PID}/oom_score_adjwhile I would like to do it in the program source itself. Is there a library call or syscall to do this, or is my only option getpid(), open(), write() & close() ?
Is there a library call or syscall to set /proc/self/oom_score_adj?
If you are using LVM, you can allocate/create a new Logical Volume and format it as swap space.

lvcreate -n swap2 -L 2G VG_NAME
mkswap /dev/VG_NAME/swap2
swapon -a

e.g. the above will create a 2G Logical Volume on the Volume Group named VG_NAME, then format the LV as swap and activate it. (Note that swapon -a only activates swap areas listed in /etc/fstab, so either add an entry for the new LV there first, or run swapon /dev/VG_NAME/swap2 directly.)
Memory Swap Ratio

Company System

Today, a monitoring system indicated that one of the systems in the company had run out of memory. Executing htop on this system showed that the memory was nearly full (~8GB), as was the swap space (~0.5GB). Stopping some of the services decreased the memory use, but the swap space remained fully allocated. According to this documentation it seems that the swap space of this company system does not match the recommended space: S = M < 2 ? M * 2 : M + 2

Default Swap Space Test System

Executing htop on a test system shows a swap-to-memory ratio of ~1:1 (1877/1799).

Subquestion: Why does the default swap space ratio not equal 2:1?

Clear and Increase Swap Space on a running system

Increase Swap Space: This documentation indicates that several commands should be executed in order to increase the swap space. Are all these commands really required?

Clear Swap Space: According to this documentation the swap space can be cleared by executing: swapoff -a && swapon -a

Safely increase Swap Space: This documentation recommends creating a backup before increasing the swap space, but it does not indicate the possible impact. As this concerns a production system, it is important to know whether it is safe to increase the swap space while the system runs, or whether e.g. a new system should be created and the data subsequently moved.

Question: What is the fastest and safest way to increase the swap space on Scientific Linux?
Fastest and Safest way to increase Swap Space on Scientific Linux
I found the solution to my problem here. The workaround: I increased the memory used to 1024M with these instructions. I set the "Maximum files opened for read/write" to a 101. I ran the application from the command line with this command: sudo bash -c 'ulimit -n 8192'; sudo -u username ./azureus
I have a program Vuze that is written in Java, which I use to download very large files, and I'm having a problem with it. I need to increase the amount of memory it uses. I've followed the directions for the application but it doesn't change the real memory usage. I would think this would then be because Java (JVM) is not set to support the amount of memory I set in the application. I both get errors about files missing and low memory. How can I increase the memory used by my Java Virtual Machine? My Java is Oracle. My system is Fedora 20 X86_64 KDE.
How to increase the memory used by Java in linux?
I found that I had not nested OOMScoreAdjust under the [Service] heading, and so it was not applied. That explains why it worked for some processes (ones where the value was properly nested under [Service],) but not others. Values set by choom don't appear to persist across reboots.
We're struggling with mysql being killed by OOMKiller since upgrading from Debian 9 to Debian 11. I see that several .service files have OOMScoreAdjust=### defined, but they don't seem to be honored, and choom tells me the score adjust values for these services are 0. The value is also ignored for other services besides mysql, like netdata but seems to be honored for systemd, which defaults to an adjust value of -1000. Is specifying OOMScoreAdjust in .service files no longer valid in Debian 11? I would guess that's not it, because systemd's score is correctly read by choom. So is something else going on? Besides choom telling me the adjust score is 0, the process continues to be killed, which makes me quite certain that the value is not being honored, but I don't know why that is. I'm not sure if this issue is specific to Debian or what, since I don't have enough information, nor do I know where to look next.
OOMScoreAdjust in .service files is ignored?
This feature is not available in Linux 3.10 which comes with CentOS 7.0. The change was commited two years later: "mm/oom_kill: count global and memory cgroup oom kills"
In Ubuntu 20.04 I can find the oom_kill counter in the file /proc/vmstat. Where can I find this metric in CentOS 7?
oom_kill counter in CentOS 7
From man xz:Memory usage Especially users of older systems may find the possibility of very large memory usage annoying. To prevent uncomfortable surprises, xz has a built-in memory usage limiter, which is disabled by default. The memory usage limiter can be enabled with the command line option --memlimit=limit. Often it is more convenient to enable the limiter by default by setting the environment variable XZ_DEFAULTS.
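Putting the two pieces together — a sketch, assuming xz-utils is installed: set the limit once via XZ_DEFAULTS so every subsequent xz invocation honours it, then verify compression still round-trips (paths here are illustrative):

```shell
# Enable the memory usage limiter globally for this shell session:
# every xz run below inherits it. 75% of physical RAM is a common cap.
export XZ_DEFAULTS="--memlimit=75%"

# Quick round-trip check that the limited xz still works.
printf 'hello xz\n' > /tmp/demo.txt
xz --keep --force /tmp/demo.txt            # produces /tmp/demo.txt.xz
roundtrip=$(xz --decompress --stdout /tmp/demo.txt.xz)
echo "$roundtrip"
```

With a tight limit, multi-threaded xz automatically scales down the number of threads (and, if needed, the dictionary size) rather than exhausting RAM. Putting the export in ~/.bashrc makes it permanent.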
I'm trying to compress a large archive with multi-threading enabled, however, my system keeps freezing up and runs out of memory. OS: Manjaro 21.1.0 Pahvo Kernel: x86_64 Linux 5.13.1-3-MANJARO Shell: bash 5.1.9 RAM: 16GB|swapon| NAME TYPE SIZE USED PRIO /swapfile file 32G 0B -2I've tried this with a /swapfile 2x the amount of RAM I have (32GB) but the system would always freeze once >90% of total RAM has been used, and would seem to not make use of the /swapfile. |xz --info-memory| Total amount of physical memory (RAM) : 15910 MiB Memory usage limit for compression: Disabled Memory usage limit for decompression: DisabledI'm new to using xz so please bear with me, but is there a way to globally enable the memory usage limiter and for the Total amount of physical memory (RAM) to take into account the space made available by /swapfile?
xz: OOM when compressing 1TB .tar
An order of 0 is one page.

page allocation order
The 'order' of a page allocation is its logarithm to the base 2, and the size of the allocation is 2^order, an integral power-of-2 number of pages. 'Order' ranges from 0 to MAX_ORDER-1. The smallest - and most frequent - page allocation is 2^0 or 1 page.

(https://linux-mm.org/PageAllocation#page_allocation_order)
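The arithmetic is easy to check on any Linux box: an order-n allocation is 2^n contiguous pages, i.e. PAGESIZE shifted left by n bytes.

```shell
# An order-n allocation is 2^n contiguous pages, so its size in bytes
# is PAGESIZE << n. order=0 is therefore exactly one page (typically 4 KiB).
pagesize=$(getconf PAGESIZE)
for order in 0 1 2 3; do
    echo "order=$order -> $(( pagesize << order )) bytes"
done
```

So the oom-killer line in the question reports a perfectly ordinary single-page allocation failing, not something sub-page.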
foobar.exe invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0

What is an order=0 allocation? That's less than one page, so is it like a kmalloc32 or something smaller than page_size?

Linux 3.x kernel, x86_64
What does order=0 mean in mem-info data (Orders are powers of two allocations, so does it mean no pages were being allocated?)