The difference is due to Windows and Linux using different CPU throttling profiles. You do have some control over this on Linux. For example, the following command will show you which profile is currently in use: cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor There are ways to choose which profile to use. The Arch Linux wiki has good information on this and may be worth a read: CPU Frequency Scaling - Arch Wiki. There is an additional issue of fan control -- you need to make sure you have the proper drivers for controlling your fans and that they are set to a high enough speed when gaming. Linux on Laptops can be a helpful resource.
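If you want to experiment, here is a minimal sketch of reading and switching profiles (assuming the standard cpufreq sysfs interface and the cpupower utility, which may need to be installed separately):

# Show the current governor for every core
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
# List the governors the active driver offers
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
# Switch all cores to the performance governor (requires root)
sudo cpupower frequency-set --governor performance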
Using a Lenovo Legion Y520 with an i7-7700HQ (base clock 2.8GHz) and a GTX 1050. I'm getting CPU overheating warnings in Linux and it's affecting my performance in games (found in Payday 2 and CS:GO). I've never had problems in Windows. This is what I found when trying to troubleshoot this issue:

In Windows 10 (using AIDA64): Windows stays at around 3.4GHz on idle (because my power settings are set to 'high performance' instead of the default 'balanced'), with a temperature of around 50C. When stressing the CPU, the temperature goes slowly (in a couple of seconds instead of instantly) from about 50C to around 75C and stays there comfortably. Clock speeds are about 2.9GHz when stressing. Utilization is always 100%. AIDA64 doesn't report throttling. The voltage on the CPU core goes from about 1.1 to 0.9 when stressing.

In Arch Linux (using s-tui): Linux stays at around 2.0GHz on idle, with a temperature of around 50C. Here's where it gets weird: when stressing the CPU, the temperature IMMEDIATELY goes from 50C to about 93C. Clock speeds are exactly 3.4GHz when stressing. Utilization is always 100%. When turning the stress test off, the temperature IMMEDIATELY goes back to about 50C, as if nothing ever happened. The laptop certainly doesn't feel like it heats up to 90C+ when doing this, even after a long stress. Here's an image that shows how temperature, power, and frequency all go down at the exact same time. Notice how much the CPU temperature changes in so little time.

How do I fix this throttling issue? Do I undervolt my CPU in Linux? How come it reads temperatures wrong in Linux but not in Windows?

I changed the profile using cpupower from powersave to performance. I still see the same throttling in s-tui. There is a jump in idle CPU frequency when setting it to performance (from around 2000-2500MHz to always 3400MHz), but that's the only thing that has changed.

Fan control

I tried to control the fans using fancontrol (lm_sensors), but pwmconfig says there are no pwm-capable sensor modules installed. I tried it with NBFC, but it doesn't seem to be doing anything, no matter what profile I choose. I don't even know if NBFC can control my fans, but it doesn't report any errors when choosing a profile. I also tried thinkfan, but it doesn't seem to help with throttling. It also thinks my fan's speed is at 8RPM, see this thread.

Solution

I found that lowering the maximum allowed CPU frequency using cpupower to something like 3100MHz instead of the default 3800 fixes all issues.

sudo cpupower frequency-set -u 3100MHz

I also changed max_freq in /etc/default/cpupower to the same value, to make it permanent. I found that this does result in a slight fps drop in games, but nothing serious. At least my fps is stable :) Sadly, I think this might result in decreased performance in non-gaming tasks, like when compiling something.

After 1.5 years I just stability tested Windows again (with AIDA64) and found it now also thermal throttles. As you can see in the image below, the temperatures jump quickly to the high 90s and AIDA64 reports throttling. The clock speed idles at 3.4GHz, and a few seconds after starting the test it drops to around 800MHz, before jumping up to 3.4GHz again a second later. It doesn't decide to lower the clock speed while stress testing to something like 2.9GHz (like before). How come it suddenly stopped lowering the maximum frequency in Windows?
CPU temperatures in Linux: throttling or wrong reading?
[Edit: Concluding thoughts regarding the processor choice]

AMD vs AMD: Richland does a much better job than Trinity here. Kaveri cannot compete with Richland's idle mode power dissipation (at least for now). The GPU of the A10-6700 may be overrated, but it's a bit sad it won't be used much. Some algorithms may be able to deploy its computational power. No idea how that will affect the processor's power consumption, though. I suspect the A10-6790K to be the same processor as the A10-6700 with just a different parameter set for Turbo Core boosts. If this is true, the A10-6790K will be able to boost longer and/or provide higher frequencies in the long term due to its higher TDP. But you'll need a different CPU fan for that (think space and temperature/life span).

AMD A10-6700 vs Intel Core i3-3220: The A10-6700 has a lot more GPU power, which is unused here. The i3-3220 has a lower idle mode power dissipation. While in typical benchmarks the i3-3220 is faster for computations, I cannot see how its two hyper-threading cores would be able to handle parallel requests (say, to a database with web frontends) as fast as four fully featured cores (at least when assuming some serious caching). I didn't find any serious benchmarks, though -- only some indications.

[Edit: The free radeon driver's bapm parameter is set by default for Kaveri, Kabini and desktop Trinity/Richland systems as of Linux 3.16] See [pull] radeon drm-fixes-3.16. However, regarding 3.16-based Debian, the defaults don't (yet?) seem to work, while the boot parameter does. See How to set up a Debian system (focus on 2D or console/server) with an AMD Turbo Core APU for maximum energy and computing efficiency?

[Edit: The free radeon driver will soon have a bapm parameter] Since the bottom line of the below is to use a patched version of the free radeon driver with your APU to support Turbo Core and get the most out of it (except 3D graphics, that is) if you can (enabling bapm can lead to instabilities in some configurations), it's great news that future versions of radeon will have a parameter to enable bapm.

[Original post follows]

AMD A10-6700 (Richland) APU Experience

Processor Choice

My first PC was a 486DX2-66 set up from dozens of 3.5" floppy disks containing Slackware source packages. Since then, a lot of things have changed, and a lot of industries currently seem to be in the phase where they still increase the number of product variants. This circumstance and some of AMD's unfortunate decisions in the recent past haven't made it easier for me to decide on a platform for a mini server. But finally, I decided that the A10-6700 would be a good choice:

Several reviews have shown that a (still widely unavailable) Kaveri will consume more power in idle mode than a Richland or a Trinity.
The advantages of the Richland A10-6700 over the Trinity A10-5700 seem to be significant: lower lowest and higher highest frequency, more fine-grained Turbo Core (considering also temperature -- quite an advantage when the GPU will be idle).
The GPU of the A10-6700 is said to be overrated (marketing-driven naming) and the APU's pricing seems fair.

Other Components and Setup

Despite the countless processors to choose from, there aren't many Mini-ITX boards available. The ASRock FM2A78M-ITX+ appeared to be a reasonable choice. The test was done with firmware V1.30 (no updates available as I write this). Only 80% of a power supply's nominal output should be consumed. On the other hand, many fail to work efficiently below 50% load.
It's very difficult to find an energy efficient power supply for a system with an estimated power dissipation range of 35W to 120W. I conducted these tests with a Seasonic G360 80+ Gold because it outperforms most competitors regarding efficiency at low loads. Two 8GB DDR3-1866 RAMs (configured as such -- which does make a difference as compared to 1333), one SSD drive and a PWM controlled quality CPU fan were also part of the test setup. The measurements were made using an AVM Fritz!DECT 200, which has been reported to perform accurate measurements. Still, plausibility was validated using an older no-name device. No inconsistencies could be identified. The measured system power dissipation will include the power supply's reduced efficiency at lower loads. A [W]QHD screen was connected via HDMI. The initial shared memory for the GPU was set to 32M in the UEFI BIOS. Also, the onboard GPU was selected as Primary, and IOMMU was enabled. No X or other graphical system was installed or configured. Video output was restricted to console mode.

Basics

There are a few things one needs to know. While the decision about Cool'n'Quiet is made by software outside of the processor, Turbo Core is a decision made autonomously by an additional microcontroller on the APU (or CPU). Many tools as well as /proc and /sys places don't report Turbo Core activity. cpufreq-aperf, cpupower frequency-info and cpupower monitor do, but only after modprobe msr.

Test Case Group 1: Linux + radeon

I started with a fresh Arch Linux (installer 2014.08.01, kernel 3.15.7). Key factors here are the presence of acpi_cpufreq (kernel CPU scaling) and radeon (kernel GPU driver), and the easy way to patch radeon.

Test Case 1.1: BIOS TC on - CnQ on / Linux OnDemand - Boost

UEFI BIOS Turbo Core Setting............................ Enabled
UEFI BIOS Cool'n'Quiet Setting.......................... Enabled
/sys/devices/system/cpu/cpufreq/boost................... 1
/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor... ondemand
"cpupower frequency-info" Pstates........ 4300 4200 3900 3700 3400 2700 2300 1800
Observed "/proc/cpuinfo" cpu MHz range... 1800 - 3700
Observed "cpupower monitor" Freq range... 1800 - 3700
/sys/kernel/debug/dri/0/radeon_pm_info... power level 0

Load           | Core Freqs
---------------+-----------
stress --cpu 1 | 1 x 3700
stress --cpu 2 | 2 x 3700
stress --cpu 3 | 3 x 3700
stress --cpu 4 | 4 x 3700

Test Case 1.2: BIOS TC on - CnQ on / Linux Performance - Boost

UEFI BIOS Turbo Core Setting............................ Enabled
UEFI BIOS Cool'n'Quiet Setting.......................... Enabled
/sys/devices/system/cpu/cpufreq/boost................... 1
/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor... performance
"cpupower frequency-info" Pstates........ 4300 4200 3900 3700 3400 2700 2300 1800
Observed "/proc/cpuinfo" cpu MHz range... 3700
Observed "cpupower monitor" Freq range... 2000 - 3700
/sys/kernel/debug/dri/0/radeon_pm_info... power level 0

Load           | Core Freqs
---------------+-----------
stress --cpu 1 | 1 x 3700
stress --cpu 2 | 2 x 3700
stress --cpu 3 | 3 x 3700
stress --cpu 4 | 4 x 3700

Test Case Group 1 Summary

Turbo Core based boosts are impossible in this scenario because the radeon driver currently disables the bapm flag due to stability issues in some scenarios.
Therefore, further testing was skipped.

Test Case Group 2: Linux + bapm-patched radeon

In order to enable bapm, I started with a fresh Arch Linux (installer 2014.08.01, kernel 3.15.7), got me the core linux package via ABS (3.15.8), edited the PKGBUILD to use pkgbase=linux-tc, pulled the sources with makepkg --nobuild, changed pi->enable_bapm = true; in trinity_dpm_init() in src/linux-3.15/drivers/gpu/drm/radeon/trinity_dpm.c, and compiled it with makepkg --noextract. Then, I installed it (pacman -U linux-tc-headers-3.15.8-1-x86_64.pkg.tar.xz and pacman -U linux-tc-3.15.8-1-x86_64.pkg.tar.xz) and updated GRUB (grub-mkconfig -o /boot/grub/grub.cfg but, of course, YMMV). As a result, I was given the choice to boot linux or linux-tc, and the following tests refer to the latter.

Test Case 2.1: BIOS TC on - CnQ on / Linux OnDemand - Boost

UEFI BIOS Turbo Core Setting............................ Enabled
UEFI BIOS Cool'n'Quiet Setting.......................... Enabled
/sys/devices/system/cpu/cpufreq/boost................... 1
/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor... ondemand
"cpupower frequency-info" Pstates........ 4300 4200 3900 3700 3400 2700 2300 1800
Observed "/proc/cpuinfo" cpu MHz range... 1800 - 3700
Observed "cpupower monitor" Freq range... 1800 - 4300
/sys/kernel/debug/dri/0/radeon_pm_info... power level 0

Load           | Core Freqs
---------------+-----------------
stress --cpu 1 | 1 x 4300
stress --cpu 2 | 2 x 4200 .. 4100
stress --cpu 3 | 3 x 4100 .. 3900
stress --cpu 4 | 4 x 4000 .. 3800

Test Case 2.2: BIOS TC on - CnQ on / Linux Performance - Boost

UEFI BIOS Turbo Core Setting............................ Enabled
UEFI BIOS Cool'n'Quiet Setting.......................... Enabled
/sys/devices/system/cpu/cpufreq/boost................... 1
/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor... performance
"cpupower frequency-info" Pstates........ 4300 4200 3900 3700 3400 2700 2300 1800
Observed "/proc/cpuinfo" cpu MHz range... 3700
Observed "cpupower monitor" Freq range... 2000 - 4300
/sys/kernel/debug/dri/0/radeon_pm_info... power level 0

Load           | Core Freqs
---------------+-----------------
stress --cpu 1 | 1 x 4300
stress --cpu 2 | 2 x 4200 .. 4100
stress --cpu 3 | 3 x 4100 .. 3900
stress --cpu 4 | 4 x 4000 .. 3800

Test Case 2.3: BIOS TC on - CnQ on / Linux OnDemand - No Boost

UEFI BIOS Turbo Core Setting............................ Enabled
UEFI BIOS Cool'n'Quiet Setting.......................... Enabled
/sys/devices/system/cpu/cpufreq/boost................... 0
/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor... ondemand
"cpupower frequency-info" Pstates........ 4300 4200 3900 3700 3400 2700 2300 1800
Observed "/proc/cpuinfo" cpu MHz range... 1800 - 3700
Observed "cpupower monitor" Freq range... 1800 - 3700
/sys/kernel/debug/dri/0/radeon_pm_info... power level 1

Load           | Core Freqs
---------------+-----------
stress --cpu 1 | 1 x 3700
stress --cpu 2 | 2 x 3700
stress --cpu 3 | 3 x 3700
stress --cpu 4 | 4 x 3700

Test Case 2.4: BIOS TC on - CnQ on / Linux Performance - No Boost

UEFI BIOS Turbo Core Setting............................ Enabled
UEFI BIOS Cool'n'Quiet Setting.......................... Enabled
/sys/devices/system/cpu/cpufreq/boost................... 0
/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor... performance
"cpupower frequency-info" Pstates........ 4300 4200 3900 3700 3400 2700 2300 1800
Observed "/proc/cpuinfo" cpu MHz range... 3700
Observed "cpupower monitor" Freq range... 2000 - 3700
/sys/kernel/debug/dri/0/radeon_pm_info... power level 1

Load           | Core Freqs
---------------+-----------
stress --cpu 1 | 1 x 3700
stress --cpu 2 | 2 x 3700
stress --cpu 3 | 3 x 3700
stress --cpu 4 | 4 x 3700

Test Case 2.5: BIOS TC off - CnQ on / Linux OnDemand - Boost

UEFI BIOS Turbo Core Setting............................ Disabled
UEFI BIOS Cool'n'Quiet Setting.......................... Enabled
/sys/devices/system/cpu/cpufreq/boost................... 1
/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor... ondemand
"cpupower frequency-info" Pstates........ 4300 4200 3900 3700 3400 2700 2300 1800
Observed "/proc/cpuinfo" cpu MHz range... 1800 - 3700
Observed "cpupower monitor" Freq range... 1800 - 3700
/sys/kernel/debug/dri/0/radeon_pm_info... power level 0

In other words, if Turbo Core is disabled in the BIOS, the patched radeon will not turn it on.

Test Case 2.6: BIOS TC on - CnQ off / Linux n/a

UEFI BIOS Turbo Core Setting............................ Enabled
UEFI BIOS Cool'n'Quiet Setting.......................... Disabled
/sys/devices/system/cpu/cpufreq/boost................... n/a
/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor... n/a
"cpupower frequency-info" Pstates........ 4300 4200 3900 3700 3400 2700 2300 1800
Observed "/proc/cpuinfo" cpu MHz range... 3700
Observed "cpupower monitor" Freq range... 2000 - 4300
/sys/kernel/debug/dri/0/radeon_pm_info... power level 0

Load           | Core Freqs
---------------+-----------------
stress --cpu 1 | 1 x 4300
stress --cpu 2 | 2 x 4100 .. 4000
stress --cpu 3 | 3 x 4000 .. 3800
stress --cpu 4 | 4 x 3900 .. 3700

With Cool'n'Quiet disabled, the Linux kernel will not offer any governor choice and will (wrongly) assume that the cores run at a fixed frequency. Interestingly, the resulting Turbo Core frequencies are the worst of all tested combinations in Test Case Group 2.

Test Case Group 2 Summary

With the patched radeon driver, Turbo Core works. No instabilities (which are the reason why bapm aka Turbo Core is disabled there) have been seen so far.

Test Case Group 3: Linux + fglrx (catalyst)

I started with a fresh Ubuntu (14.04 Server, kernel 3.13) installation, which I see as comparable to the Arch Linux (installer 2014.08.01, kernel 3.15.7) due to the presence of acpi_cpufreq (kernel CPU scaling) and radeon (kernel GPU driver). The reason for switching to Ubuntu is the easy installation of fglrx. I validated the power consumption and behaviour with the fresh installation, which uses radeon. I installed fglrx from the command line (sudo apt-get install linux-headers-generic, sudo apt-get install fglrx) and rebooted the system. The change from radeon to fglrx is immediately obvious both regarding console appearance (fglrx: 128 x 48, radeon: much higher) and idle mode power consumption (fglrx: 40W, radeon: 30W). But Turbo Core works right away.

Test Case 3.1: BIOS TC on - CnQ on / Linux OnDemand - Boost

UEFI BIOS Turbo Core Setting............................ Enabled
UEFI BIOS Cool'n'Quiet Setting.......................... Enabled
/sys/devices/system/cpu/cpufreq/boost................... 1
/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor... ondemand
"cpupower frequency-info" Pstates........ 4300 4200 3900 3700 3400 2700 2300 1800
Observed "/proc/cpuinfo" cpu MHz range... 1800 - 3700
Observed "cpupower monitor" Freq range... 1800 - 4300
/sys/kernel/debug/dri/0/radeon_pm_info... n/a

Load           | Core Freqs
---------------+----------------------------
stress --cpu 1 | 1 x 4300
stress --cpu 2 | 2 x 4200 .. 3900 (core chg)
stress --cpu 3 | 3 x 4100 .. 3700
stress --cpu 4 | 4 x 4000 .. 3600

The fglrx behaviour is definitely interesting. When 'stress --cpu 2' was called for any of the test cases in any test case group, the two loaded cores were always located on separate modules. But with fglrx, a sudden reallocation occurred such that a single module was used (which saves quite some power, see below). After some time, the loaded core moved back to the other module. This was not seen with radeon. Could it be that fglrx manipulates the core affinity of processes?

Test Case Group 3 Summary

The advantage of fglrx is that it enables Turbo Core right away, without any need for patching. Because fglrx wastes 10 to 12W for the GPU in our scenario on a chip with 65W TDP, the overall results regarding available core speeds are unimpressive. Therefore, no further tests were conducted. Also from an engineering point of view, the behaviour of fglrx appears to be a bit sad. Reallocating one of two busy cores to the other module to maintain a higher frequency may or may not be a good idea, because before that step, both cores had an L2 cache of their own, while afterwards, they have to share one. Whether fglrx considers any metrics (such as cache misses) to support its decision will have to be clarified separately, but there are other reports about its abrupt behaviour.

Summary of Power Consumption

Some of the delta values in the following table get slightly worse as the temperature rises; one might say the PWM controlled fan and the chip both play a role there.

System @State / ->Transition Delta   | System Power Dissipation
-------------------------------------+-------------------------
@BIOS                                | @ 95 .. 86W
@Bootloader                          | @108 .. 89W
@Ubuntu Installer Idle               | @ 40W
@Linux radeon Idle ondemand          | @ 30W
@Linux radeon Idle performance       | @ 30W
@Linux fglrx Idle ondemand           | @ 40W
1 Module 1800 -> 3700                | + 13W
1 Module 1800 -> 4300                | + 25W
1 Core 1800 -> 3700                  | +  5W
1 Core 1800 -> 4300                  | + 10W
"radeon" Video Out -> Disable        | -  2W
"fglrx" Video Out -> Darken          | +- 0W
@Linux radeon Maximum                | @103 .. 89W
@Linux fglrx Maximum                 | @105 .. 92W

There seems to be more to Turbo Core (at least with Richland APUs) than expected:

There is no noticeable difference in power dissipation when the "ondemand" scaling governor is in place as compared to when the "performance" governor is in place. Although /proc/cpuinfo will always report 3700MHz under the performance governor, cpupower monitor will reveal that the cores actually do slow down. In some cases, frequencies as low as 2000MHz were shown; it's possible that 1800MHz is used internally as well.
The A10-6700 consists of two modules with two cores each. If e.g. two cores are idle and two cores are busy and get accelerated, the system behaviour will differ depending on whether the busy cores are located on the same module or not. Accelerating a module is more energy-intensive than accelerating a core. The L2 cache is assigned per module.
The difference between the power dissipation of two cores accelerating on the same module vs on different modules was determined by replacing stress --cpu 2 (which always led to a distribution amongst the two modules) by taskset -c 0 stress --cpu 1 and taskset -c 1 stress --cpu 1.
The A10-6700 seems to have a total power dissipation limit for the APU (92W together with the other components) with a tiny bit reserved for the GPU alone (3W).
With radeon, it will allow more for a short period and reduce to the maximum very smoothly, while with fglrx, it has been observed that these limits are exceeded more significantly and power dissipation is then reduced abruptly. While many people claim that the delay in Kaveri availability is intended by AMD because it would kill their current APUs, I beg to differ. The Richland A10 has demonstrated excellent power management, and the Kaveri cannot compete with its low idle state power consumption (Kaveri's chip complexity is almost twice that of Richland's, so it will take another one or two development steps).

Overall Conclusion

Including temperature in the Turbo Core logic (as is reported for the Trinity -> Richland step) seems to make sense and appears to work well, as can be seen by the reduction in power dissipation in BIOS and Bootloader over time. For the console/server scenario, the A10-6700 supports 4 cores @ 3700MHz (3800MHz with Turbo Core) over the long term, at least with the radeon driver. There's probably not much chance to maintain this performance level when the GPU gets some work to do. It would seem that the 65W TDP can be permanently exceeded slightly under full load, but it's hard to tell as the power supply has a lower efficiency at 30W. Since there are clear indications that the temperature is considered (a peak power dissipation of almost 110W was observed before it started to be reduced to 90W, and also 2 cores at 4300 MHz were reported for some time), investing in APU cooling may be a good idea. However, mainboards limited to 65W TDP will only be able to supply so much current, so there will definitely be a hard limit imposed by the APU. If you intend to use a Richland APU for computing under Linux, you definitely want to use a patched radeon driver (if you do not encounter instabilities -- specifically in conjunction with the enabling of Dynamic Power Management). Otherwise, you'll not get full value. Oddly enough, it seems that the best setup would be to enable both Turbo Core and Cool'n'Quiet in the BIOS but then choose the performance scaling governor -- at least if your APU behaves like the one tested here. You'll have the same power consumption as with ondemand but faster frequency scaling and less kernel overhead for making the scaling decision.

Acknowledgements

Special thanks go to Alex Deucher, who significantly pushed me in the right direction over at bugzilla.kernel.org. I am impressed by the quality of the free radeon driver and would like to thank the whole team for maintaining this piece of software, which appears to be thoughtfully engineered. If radeon did not behave as it does, my decision in favour of the A10-6700 would have been substantially wrong.
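Appendix: for readers who want to reproduce the bapm patch from Test Case Group 2, here are the steps condensed into one sequence. This is a sketch: the sed patterns are illustrative (verify them against your source tree), and the version numbers reflect the state at the time of writing.

# fetch the kernel package sources via ABS and rename the package
cd ~/abs/core/linux
sed -i 's/^pkgbase=linux$/pkgbase=linux-tc/' PKGBUILD
makepkg --nobuild    # download and unpack the sources only
# enable bapm in the radeon Trinity/Richland DPM code
sed -i 's/pi->enable_bapm = false;/pi->enable_bapm = true;/' \
    src/linux-3.15/drivers/gpu/drm/radeon/trinity_dpm.c
makepkg --noextract  # build without re-extracting the sources
sudo pacman -U linux-tc-headers-3.15.8-1-x86_64.pkg.tar.xz
sudo pacman -U linux-tc-3.15.8-1-x86_64.pkg.tar.xz
sudo grub-mkconfig -o /boot/grub/grub.cfg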
My objective is to set up a mini server (not an HTPC) with low power consumption in idle mode, yet offering nice performance when used. The focus is more on data safety than availability. In other words: quality parts, but redundancy only for storage. Not considering myself to be biased, after some research I felt that some AMD desktop APUs would offer good value. The questions remaining are:

Will the idle state of the GPU lower power consumption and unleash resources for the CPU?
Will Cool'n'Quiet and Turbo Core lead to the intended low power consumption in idle mode but performance under load?
Will Linux support this scenario as intended?

Quite a few questions and forum discussions seem to suggest that this is not necessarily the case.
How to set up Linux for full AMD APU power management support: Turbo Core, Cool'n'Quiet, Dynamic Power Management?
There is a hack solution via shell script: https://github.com/Sepero/temp-throttle/

#!/bin/bash

# Usage: temp_throttle.sh max_temp
# USE CELSIUS TEMPERATURES.
# version 2.20

cat << EOF
Author: Sepero 2016 (sepero 111 @ gmx . com)
URL: http://github.com/Sepero/temp-throttle/
EOF

# Additional Links
# http://seperohacker.blogspot.com/2012/10/linux-keep-your-cpu-cool-with-frequency.html

# Additional Credits
# Wolfgang Ocker <weo AT weo1 DOT de> - Patch for unspecified cpu frequencies.

# License: GNU GPL 2.0

# Generic function for printing an error and exiting.
err_exit () {
	echo ""
	echo "Error: $@" 1>&2
	exit 128
}

if [ $# -ne 1 ]; then
	# If temperature wasn't given, then print a message and exit.
	echo "Please supply a maximum desired temperature in Celsius." 1>&2
	echo "For example: ${0} 60" 1>&2
	exit 2
else
	# Set the first argument as the maximum desired temperature.
	MAX_TEMP=$1
fi

### START Initialize Global variables.

# The frequency will increase when low temperature is reached.
LOW_TEMP=$((MAX_TEMP - 5))

CORES=$(nproc) # Get number of CPU cores.
echo -e "Number of CPU cores detected: $CORES\n"
CORES=$((CORES - 1)) # Subtract 1 from $CORES for easier counting later.

# Temperatures internally are calculated to the thousandth.
MAX_TEMP=${MAX_TEMP}000
LOW_TEMP=${LOW_TEMP}000

FREQ_FILE="/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies"
FREQ_MIN="/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_min_freq"
FREQ_MAX="/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq"

# Store available cpu frequencies in a space separated string FREQ_LIST.
if [ -f $FREQ_FILE ]; then
	# If $FREQ_FILE exists, get frequencies from it.
	FREQ_LIST=$(cat $FREQ_FILE) || err_exit "Could not read available cpu frequencies from file $FREQ_FILE"
elif [ -f $FREQ_MIN -a -f $FREQ_MAX ]; then
	# Else if $FREQ_MIN and $FREQ_MAX exist, generate a list of frequencies between them.
	FREQ_LIST=$(seq $(cat $FREQ_MAX) -100000 $(cat $FREQ_MIN)) || err_exit "Could not compute available cpu frequencies"
else
	err_exit "Could not determine available cpu frequencies"
fi

FREQ_LIST_LEN=$(echo $FREQ_LIST | wc -w)

# CURRENT_FREQ will save the index of the currently used frequency in FREQ_LIST.
CURRENT_FREQ=2

# This is a list of possible locations to read the current system temperature.
TEMPERATURE_FILES="
/sys/class/thermal/thermal_zone0/temp
/sys/class/thermal/thermal_zone1/temp
/sys/class/thermal/thermal_zone2/temp
/sys/class/hwmon/hwmon0/temp1_input
/sys/class/hwmon/hwmon1/temp1_input
/sys/class/hwmon/hwmon2/temp1_input
/sys/class/hwmon/hwmon0/device/temp1_input
/sys/class/hwmon/hwmon1/device/temp1_input
/sys/class/hwmon/hwmon2/device/temp1_input
null
"

# Store the first temperature location that exists in the variable TEMP_FILE.
# The location stored in $TEMP_FILE will be used for temperature readings.
for file in $TEMPERATURE_FILES; do
	TEMP_FILE=$file
	[ -f $TEMP_FILE ] && break
done

[ $TEMP_FILE == "null" ] && err_exit "The location for temperature reading was not found."

### END Initialize Global variables.

### START define script functions.

# Set the maximum frequency for all cpu cores.
set_freq () {
	# From the string FREQ_LIST, we choose the item at index CURRENT_FREQ.
	FREQ_TO_SET=$(echo $FREQ_LIST | cut -d " " -f $CURRENT_FREQ)
	echo $FREQ_TO_SET
	for i in $(seq 0 $CORES); do
		# Try to set core frequency by writing to /sys/devices.
		{ echo $FREQ_TO_SET 2> /dev/null > /sys/devices/system/cpu/cpu$i/cpufreq/scaling_max_freq; } ||
		# Else, try to set core frequency using command cpufreq-set.
		{ cpufreq-set -c $i --max $FREQ_TO_SET > /dev/null; } ||
		# Else, return error message.
		{ err_exit "Failed to set frequency CPU core$i. Run script as Root user. Some systems may require to install the package cpufrequtils."; }
	done
}

# Will reduce the frequency of cpus if possible.
throttle () {
	if [ $CURRENT_FREQ -lt $FREQ_LIST_LEN ]; then
		CURRENT_FREQ=$((CURRENT_FREQ + 1))
		echo -n "throttle "
		set_freq $CURRENT_FREQ
	fi
}

# Will increase the frequency of cpus if possible.
unthrottle () {
	if [ $CURRENT_FREQ -ne 1 ]; then
		CURRENT_FREQ=$((CURRENT_FREQ - 1))
		echo -n "unthrottle "
		set_freq $CURRENT_FREQ
	fi
}

get_temp () {
	# Get the system temperature.
	TEMP=$(cat $TEMP_FILE)
}

### END define script functions.

echo "Initialize to max CPU frequency"
unthrottle

# Main loop
while true; do
	get_temp # Gets the current temperature and set it to the variable TEMP.
	if [ $TEMP -gt $MAX_TEMP ]; then # Throttle if too hot.
		throttle
	elif [ $TEMP -le $LOW_TEMP ]; then # Unthrottle if cool.
		unthrottle
	fi
	sleep 3 # The amount of time between checking temperatures.
done
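Usage is simply the target temperature in Celsius, e.g. sudo ./temp_throttle.sh 80. To have it start at boot on a systemd distribution such as Arch, a minimal unit sketch follows (the install path /usr/local/bin/temp_throttle.sh and the unit name are assumptions, not part of the project):

# /etc/systemd/system/temp-throttle.service
[Unit]
Description=Throttle CPU frequency by temperature

[Service]
ExecStart=/usr/local/bin/temp_throttle.sh 80
Restart=always

[Install]
WantedBy=multi-user.target

Then enable it with: sudo systemctl enable --now temp-throttle.service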
According to sensors, the critical temperature for my CPU cores is 100°C. When using my laptop it never goes above 95°C (so either my sensor is defective or thermal throttling is set to a lower value for some reason, but this doesn't really matter). I have an Intel i7, thermald.service is up and running, and I'm on Arch Linux. But 95°C is way too hot and I'd like to lower that value. I'd like to have thermal throttling at 75 or 80°C. I thought this would be simple, but apparently there is little information on Google and the configuration of thermald lacks documentation. I tried

dbus-send --system --dest=org.freedesktop.thermald /org/freedesktop/thermald org.freedesktop.thermald.SetUserPassiveTemperature string:cpu uint32:80000

as the manpage suggests, but running stress still got the temperature up to 95. So how do I lower the value at which thermal throttling happens?
Set critical CPU temperature for thermal throttling
The solution turned out to be to pass intel_pstate=passive to the kernel. Then intel_pstate relinquishes control back to CPUFreq. The latter still uses intel_pstate to govern the CPU, but intel_pstate has no say in what to do. After that, you can finally set performance policies. Your laptop can be either completely quiet, or you can make it very noisy but powerful. When intel_pstate is active, the machine is neither quiet nor well performing, but rather always slow and noisy.

Two years later update

It turned out there is more to the story. The computer in question was a Gigabyte laptop. When I got a new laptop, also from Gigabyte, the trick with intel_pstate=passive didn't help. I started digging deeper and found out that (i) Gigabyte's firmware limits the performance if the laptop is running on anything but Windows. ACPI knows the OS it is running on via _OSI. Not only that: usually laptop manufacturers contribute to the Linux kernel and write a small vendor-specific driver that helps to monitor the system and manage the performance. If you look in the kernel source code, you will find a lot of them in drivers/platform/x86, for Dell, HP, ASUS, Lenovo, Fujitsu... Well, (ii) Gigabyte has done nothing on this front. The first problem can be solved by passing acpi_os_name="Windows 2015" to the kernel. The performance will become better. However, to really make a Gigabyte laptop usable on Linux, somebody has to write a kernel driver for it. There is a user-space workaround, used in this project, that utilizes a debug feature of the kernel and writes values directly to the embedded controller registers. It's dangerous and requires undocumented information about the EC. What one should do instead is to call WMI's ACPI methods from the firmware.
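For reference, a sketch of adding these parameters with GRUB2 (assuming the usual /etc/default/grub layout; note the escaped inner quotes, since acpi_os_name contains a space):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_pstate=passive acpi_os_name=\"Windows 2015\""

# regenerate the config and reboot
sudo grub-mkconfig -o /boot/grub/grub.cfg   # or: sudo update-grub on Debian/Ubuntu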
I spent hours searching for an answer on the Internet. All I could find doesn't help. I have an Intel i9-9980HK, running under Ubuntu 20.04, kernel 5.4.0-33. The problem is that under full load the CPU lowers the frequency to 2.7 GHz, I guess in order to stay under a low power budget. Whatever I try, I can't make it run faster. It stays under 65C, quietly and slowly crunching numbers. For comparison, the same machine under Windows runs from 3 to 4+ GHz under full load. What I tried:

Change the governor to performance. No effect.
Set /sys/devices/system/cpu/cpufreq/policyX/energy_performance_preference to performance. No effect.
sudo service thermald stop. No effect.
Increase /sys/devices/system/cpu/intel_pstate/turbo_pct. Access denied even for root.
Increase /sys/devices/system/cpu/cpufreq/policyX/scaling_min_freq. No effect.

I am lost. What does it want? Btw, /sys/devices/system/cpu/intel_pstate/status is active.

Update. I think I know the reason. When intel_pstate is active, it ignores all the settings (like the governor, everything under /sys/devices/system/cpu/cpufreq). Tools like cpupower cannot control intel_pstate. So the question pretty much boils down to how to control the intel_pstate driver.
Set CPU to high performance
I have found the solution thanks to the tip given by Nils and a nice article.

Tuning the ondemand CPU DVFS governor

The ondemand governor has a set of parameters to control when it kicks in the dynamic frequency scaling (or DVFS, for dynamic voltage and frequency scaling). Those parameters are located under the sysfs tree: /sys/devices/system/cpu/cpufreq/ondemand/

One of these parameters is up_threshold, which, like the name suggests, is a threshold (unit is % of CPU; I haven't found out though if this is per core or merged cores) above which the ondemand governor kicks in and starts changing the frequency dynamically. To change it to 50% (for example) using sudo is simple:

sudo bash -c "echo 50 > /sys/devices/system/cpu/cpufreq/ondemand/up_threshold"

If you are root, an even simpler command is possible:

echo 50 > /sys/devices/system/cpu/cpufreq/ondemand/up_threshold

Note: those changes will be lost after the next host reboot. You should add them to a configuration file that is read during boot, like /etc/init.d/rc.local on Ubuntu.

I have found out that my guest VM, although consuming a lot of CPU (80-140%) on the host, was distributing the load on both cores, so no single core was above 95%; thus the CPU, to my exasperation, was staying at 800 MHz. Now with the above change, the CPU dynamically changes its frequency per core much faster, which suits my needs better. 50% seems a better threshold for my guest usage; your mileage may vary.

Optionally, verify if you are using HPET

It is possible that some applications which implement timers incorrectly might get affected by DVFS. This can be a problem on the host and/or guest environment, though the host can have some convoluted algorithm to try to minimise this. However, modern CPUs have newer TSCs (Time Stamp Counters) which are independent of the current CPU/core frequency; those are: constant (constant_tsc), invariant (invariant_tsc) or non-stop (nonstop_tsc), see this Chromium article about TSC resynchronisation for more information on each. So if your CPU is equipped with one of these TSCs, you don't need to force HPET. To verify if your host CPU supports them, use a similar command (change the grep parameter to the corresponding CPU feature; here we test for the constant TSC):

$ grep constant_tsc /proc/cpuinfo

If you do not have one of these modern TSCs, you should either:

Activate HPET, as described hereafter;
Not use CPU DVFS if you have any applications in the VM that rely on precise timing, which is the option recommended by Red Hat.

A safe solution is to enable HPET timers (see below for more details). They are slower to query than TSC ones (TSCs are in the CPU vs. HPET in the motherboard) and perhaps not as precise (HPET >10MHz; TSC often the max CPU clock), but they are much more reliable, especially in a DVFS configuration where each core could have a different frequency. Linux is clever enough to use the best available timer: it will rely first on the TSC, but if that is found too unreliable, it will use the HPET one. This works well on host (bare metal) systems, but because not all information is properly exported by the hypervisor, it is more of a challenge for the guest VM to detect a badly behaving TSC. The trick is then to force the use of HPET in the guest, although you would need the hypervisor to make this clock source available to the guests! Below you can find how to configure and/or enable HPET on Linux and FreeBSD.

Linux HPET configuration

HPET, or high-precision event timer, is a hardware timer that you can find in most commodity PCs since 2005. This timer can be used efficiently by modern OSes (the Linux kernel supports it since 2.6; stable support on FreeBSD since the latest 9.x, but it was introduced in 6.3) to provide consistent timing invariant to CPU power management. It also allows easier tick-less scheduler implementations. Basically, HPET is like a safety barrier: even if the host has DVFS active, the host and guest timing events will be less affected. There is a good article from IBM regarding enabling HPET; it explains how to verify which hardware timer your kernel is using and which are available. I provide here a brief summary:

Checking the available hardware timer(s): cat /sys/devices/system/clocksource/clocksource0/available_clocksource
Checking the current active timer: cat /sys/devices/system/clocksource/clocksource0/current_clocksource

The simplest way to force usage of HPET, if you have it available, is to modify your boot loader to ask to enable it (since kernel 2.6.16). This configuration is distribution dependent, so please refer to your own distribution documentation to set it properly. You should enable hpet=enable or clocksource=hpet on the kernel boot line (this again depends on the kernel version or distribution; I did not find any coherent information). This makes sure that the guest is using the HPET timer. Note: on my kernel 3.5, Linux seems to pick up the hpet timer automatically.

FreeBSD guest HPET configuration

On FreeBSD one can check which timers are available by running:

sysctl kern.timecounter.choice

The currently chosen timer can be verified with:

sysctl kern.timecounter.hardware

FreeBSD 9.1 seems to automatically prefer HPET over other timer providers. Todo: how to force HPET on FreeBSD.

Hypervisor HPET export

KVM seems to export HPET automatically when the host has support for it. However, Linux guests will prefer the other automatically exported clock, which is kvm-clock (a paravirtualised version of the host TSC). Some people report trouble with the preferred clock; your mileage may vary. If you want to force HPET in the guest, refer to the above section.

VirtualBox does not export the HPET clock to the guest by default, and there is no option to do so in the GUI. You need to use the command line and make sure the VM is powered off. The command is:

./VBoxManage modifyvm "VM NAME" --hpet on

If the guest keeps selecting a source other than HPET after the above change, please refer to the above section on how to force the kernel to use the HPET clock as a source.
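On recent kernels you can also switch the clock source at runtime, without touching the boot line. A quick sketch (requires root; the change does not persist across reboots):

# what is available, and what is in use right now?
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
# force hpet until the next reboot
echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource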
Observation: I have an HP server with an AMD dual core CPU (Turion II Neo N40L) which can scale frequencies from 800 to 1500 MHz. The frequency scaling works under FreeBSD 9 and under Ubuntu 12.04 with Linux kernel 3.5. However, when I put FreeBSD 9 in a KVM environment on top of Ubuntu, the frequency scaling does not work. The guest (thus FreeBSD) does not detect the minimum and maximum frequencies and thus does not scale anything when CPU occupation gets higher. On the host (thus Ubuntu) the KVM process uses between 80 and 140% of the CPU resources, but no frequency scaling happens; the frequency stays at 800 MHz, although when I run any other process on the same Ubuntu box, the ondemand governor quickly scales the frequency to 1500 MHz!

Concern and question: I don't understand how the CPU is virtualised, and whether it is up to the guest to perform the proper scaling. Does it require some CPU features to be exposed to the guest for this to work?

Appendix: The following Red Hat release note tends to suggest that frequency scaling ought to work even in a virtualised environment (see chapters 6.2.2 and 6.2.3), though the note fails to address which virtualisation technology this works with (KVM, Xen, etc.). For information, the cpufreq-info output on Ubuntu is:

$ cpufreq-info
cpufrequtils 007: cpufreq-info (C) Dominik Brodowski 2004-2009
Report errors and bugs to [emailprotected], please.
analyzing CPU 0:
  driver: powernow-k8
  CPUs which run at the same hardware frequency: 0
  CPUs which need to have their frequency coordinated by software: 0
  maximum transition latency: 8.0 us.
  hardware limits: 800 MHz - 1.50 GHz
  available frequency steps: 1.50 GHz, 1.30 GHz, 1000 MHz, 800 MHz
  available cpufreq governors: conservative, ondemand, userspace, powersave, performance
  current policy: frequency should be within 800 MHz and 1.50 GHz.
                  The governor "ondemand" may decide which speed to use
                  within this range.
  current CPU frequency is 800 MHz.
  cpufreq stats: 1.50 GHz:14.79%, 1.30 GHz:1.07%, 1000 MHz:0.71%, 800 MHz:83.43% (277433)
analyzing CPU 1:
  driver: powernow-k8
  CPUs which run at the same hardware frequency: 1
  CPUs which need to have their frequency coordinated by software: 1
  maximum transition latency: 8.0 us.
  hardware limits: 800 MHz - 1.50 GHz
  available frequency steps: 1.50 GHz, 1.30 GHz, 1000 MHz, 800 MHz
  available cpufreq governors: conservative, ondemand, userspace, powersave, performance
  current policy: frequency should be within 800 MHz and 1.50 GHz.
                  The governor "ondemand" may decide which speed to use
                  within this range.
  current CPU frequency is 800 MHz.
  cpufreq stats: 1.50 GHz:14.56%, 1.30 GHz:1.06%, 1000 MHz:0.79%, 800 MHz:83.59% (384089)

The reason I want this feature to work is: save energy, run quieter (less hot), and also simple curiosity to understand better why this is not working and how to make it work.
Host CPU does not scale frequency when KVM guest needs it
Do both clocks run at the same frequency?

Usually there are two clocks inside a computer/device/system. One is powered from a battery (usually a CR2032; it could be the main battery or even a supercap in an embedded system) and runs from a dedicated chip. The other one is driven by the CPU clock source (with its own quartz crystal). One usually runs from a 32.768kHz crystal, the other from a CPU crystal in the MHz or GHz range. There is a lot of variation, as there are a lot of CPU models.

Are both independent of each other?

Yes, most of the time. But one can adjust the other (on embedded Linux you typically have the hwclock command with options -r or -w). The CPU clock is set from the clock chip on boot (the CPU has no idea what time it is when booting). For a system in a network, the CPU clock might find a better time value from the network via NTP (Network Time Protocol) and then adjust or correct the value inside the clock chip.

What happens when the real time clock fails? Does it affect the system clock?

Yes, sure: if the battery runs out, for example, the computer boots up with a completely out-of-whack idea of the real time. But nowadays most systems have some network connectivity and update their concept of real time pretty soon after boot via the NTP protocol.

Can anyone let me know the difference between the two clocks?

As said above, one clock source is a chip, the other is the CPU. Note that I have avoided calling the chip clock the RTC clock, as there are internal values on the CPU also called RTC. But yes, that is the common name for it.

Related: Real-Time Clock Kernel reference, Red Hat reference
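A quick sketch of inspecting and syncing the two clocks with hwclock (requires root; these are the standard util-linux hwclock options):

date          # the system clock, maintained by the kernel
hwclock -r    # read the hardware (RTC) clock
hwclock -w    # write the system clock to the RTC (systohc)
hwclock -s    # set the system clock from the RTC (hctosys)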
The system clock is maintained by the kernel, whereas the hardware clock is maintained by the Real Time Clock (RTC). Do both clocks run at the same frequency? Are both independent of each other? What happens when the real time clock fails? Does it affect the system clock? Can anyone let me know the difference between the two clocks?
System Clock vs. Hardware Clock (RTC) in embedded systems
As it turned out, it was a thermal issue, but not a software-related one. After I sent the device back to the factory, they replaced the cooler and the issue was fixed! Apparently the CPU got up to just under 100 degrees Celsius and then immediately thermal throttled.
This problem has been bothering me for some weeks now and I cannot seem to figure out what the real problem might be. The problem is that the CPU frequency drops drastically when under load. By this I mean that the CPU frequency is around 400 MHz when just opening a web browser, for example, and when there is no load the frequency rises back up (not to a very high one, but still it is not static behavior). It is really driving me crazy. Some further information that might help:

Hardware: Lenovo ThinkPad T15
CPU: Intel i7-10510U => Base clock: 1.8GHz => Boost clock: 4.9GHz

Software:
Distro: Ubuntu 20.04.1 LTS
Kernel: 5.4.0-52-generic

⇒ cpupower frequency-info
analyzing CPU 0:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 0
  CPUs which need to have their frequency coordinated by software: 0
  maximum transition latency: Cannot determine or is not supported.
  hardware limits: 400 MHz - 4.90 GHz
  available cpufreq governors: performance powersave
  current policy: frequency should be within 400 MHz and 4.90 GHz.
                  The governor "powersave" may decide which speed to use
                  within this range.
  current CPU frequency: Unable to call hardware
  current CPU frequency: 1.24 GHz (asserted by call to kernel)
  boost state support:
    Supported: yes
    Active: yes

# command to simulate a stress on the CPU
⇒ stress-ng --cpu 8 --timeout 15s
stress-ng: info: [43652] dispatching hogs: 8 cpu
stress-ng: info: [43652] successful run completed in 15.34s

# The result of the stress on the CPU
⇒ sudo turbostat --Summary --quiet --show Busy%,Bzy_MHz,PkgTmp,PkgWatt,GFXWatt,IRQ --interval 6
Busy%   Bzy_MHz IRQ     PkgTmp  PkgWatt GFXWatt
6.58    1862    11418   51      5.00    0.00
7.69    1813    14444   51      4.96    0.00
7.79    1817    16988   51      5.03    0.00
7.99    1724    14679   51      5.00    0.00
9.12    1542    14504   51      4.91    0.00
8.82    1662    13878   51      4.98    0.00
60.61   1060    19508   52      5.84    0.00   # Applied load around here
99.75   460     19984   51      4.59    0.00
98.06   654     21316   51      4.79    0.00
10.26   1181    16730   51      4.25    0.00   # load ended around here
5.90    1782    10315   50      4.74    0.00
6.60    1890    11701   50      5.10    0.00
6.00    1901    10736   50      5.13    0.00
6.74    1981    13477   51      5.23    0.00
7.43    1731    1500    50      4.92    0.00

⇒ cpufreq-info
cpufrequtils 008: cpufreq-info (C) Dominik Brodowski 2004-2009
Report errors and bugs to [emailprotected], please.
analyzing CPU 0:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 0
  CPUs which need to have their frequency coordinated by software: 0
  maximum transition latency: 4294.55 ms.
  hardware limits: 400 MHz - 4.90 GHz
  available cpufreq governors: performance, powersave
  current policy: frequency should be within 400 MHz and 4.90 GHz.
                  The governor "powersave" may decide which speed to use
                  within this range.
  current CPU frequency is 1.26 GHz.
analyzing CPU 1:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 1
  CPUs which need to have their frequency coordinated by software: 1
  maximum transition latency: 4294.55 ms.
  hardware limits: 400 MHz - 4.90 GHz
  available cpufreq governors: performance, powersave
  current policy: frequency should be within 400 MHz and 4.90 GHz.
                  The governor "powersave" may decide which speed to use
                  within this range.
  current CPU frequency is 1.48 GHz.
analyzing CPU 2:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 2
  CPUs which need to have their frequency coordinated by software: 2
  maximum transition latency: 4294.55 ms.
  hardware limits: 400 MHz - 4.90 GHz
  available cpufreq governors: performance, powersave
  current policy: frequency should be within 400 MHz and 4.90 GHz.
                  The governor "powersave" may decide which speed to use
                  within this range.
  current CPU frequency is 982 MHz.
analyzing CPU 3:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 3
  CPUs which need to have their frequency coordinated by software: 3
  maximum transition latency: 4294.55 ms.
  hardware limits: 400 MHz - 4.90 GHz
  available cpufreq governors: performance, powersave
  current policy: frequency should be within 400 MHz and 4.90 GHz.
                  The governor "powersave" may decide which speed to use
                  within this range.
  current CPU frequency is 983 MHz.
analyzing CPU 4:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 4
  CPUs which need to have their frequency coordinated by software: 4
  maximum transition latency: 4294.55 ms.
  hardware limits: 400 MHz - 4.90 GHz
  available cpufreq governors: performance, powersave
  current policy: frequency should be within 400 MHz and 4.90 GHz.
                  The governor "powersave" may decide which speed to use
                  within this range.
  current CPU frequency is 1.06 GHz.
analyzing CPU 5:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 5
  CPUs which need to have their frequency coordinated by software: 5
  maximum transition latency: 4294.55 ms.
  hardware limits: 400 MHz - 4.90 GHz
  available cpufreq governors: performance, powersave
  current policy: frequency should be within 400 MHz and 4.90 GHz.
                  The governor "powersave" may decide which speed to use
                  within this range.
  current CPU frequency is 872 MHz.
analyzing CPU 6:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 6
  CPUs which need to have their frequency coordinated by software: 6
  maximum transition latency: 4294.55 ms.
  hardware limits: 400 MHz - 4.90 GHz
  available cpufreq governors: performance, powersave
  current policy: frequency should be within 400 MHz and 4.90 GHz.
                  The governor "powersave" may decide which speed to use
                  within this range.
  current CPU frequency is 926 MHz.
analyzing CPU 7:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 7
  CPUs which need to have their frequency coordinated by software: 7
  maximum transition latency: 4294.55 ms.
  hardware limits: 400 MHz - 4.90 GHz
  available cpufreq governors: performance, powersave
  current policy: frequency should be within 400 MHz and 4.90 GHz.
                  The governor "powersave" may decide which speed to use
                  within this range.
  current CPU frequency is 850 MHz.

Things I've tried so far:

setting the power governor to performance
setting the intel_pstate driver frequency limits
using cpupower to set the frequency limits (does this do the same as changing the intel_pstate driver values directly?)
reinstalling Ubuntu 20.04 -- had the same issue upon a clean install

I was once able to get my system up and running like it should be: after rebooting from Windows, the CPU went right up to the max CPU limit when running a fake load on the system and kept working for the rest of the day. The system thermal throttled as expected, but never dropped below 2 GHz as far as I could tell. However, after rebooting, the issue reappeared... I wasn't able to reproduce this behavior afterwards either... If it was not clear already: the question is how to solve this so that I can use the full potential of my laptop and not wait every time I load a new window or open a new browser tab? Thanks in advance!

EDIT: added reinstall to things I tried
CPU Frequency drops under load without thermal issues
This is not yet close to being a definitive answer. Instead, it's a set of suggestions too long to fit in comments.

I'm afraid you might be slightly misinterpreting the meanings of the sysfs cpufreq parameters. For instance, on my Core Duo laptop, the related_cpus parameters for both cores read 0 1 -- which, according to your interpretation, would mean that the cores cannot switch frequencies independently. But that is not the case -- I can set each frequency at will. By contrast, the affected_cpus parameter for each core lists only the respective CPU number. You might want to take a look at the kernel documentation for cpu-freq to get a better understanding of parameters such as affected_cpus, related_cpus, scaling_* and cpuinfo_*. The documentation is normally distributed with kernel source packages. Specifically, I recommend reading <kernel-sources-dir>/Documentation/cpu-freq/user-guide.txt, where <kernel-sources-dir> would typically stand for /usr/src/linux or /usr/src/linux-<kernel-version>. (However, when I skim through the documentation myself now, I confess I don't catch some of the frequency-scaling-related nuances. To fully understand these, one probably needs to gain a solid understanding of CPU architectures first.)

Back to your question. And one more test case on my part: when I change the value of scaling_max_freq (with either the userspace or the performance governor being used), the core's clock automatically switches to that new maximum. The different behaviour you're observing might be any of:

specific to the hardware implementation of frequency scaling mechanisms on your CPU,
due to differences between the standard cpufreq module and phc-intel, which I'm using,
normal behaviour (call it a bug or a feature if you will) of the cpufreq module, which has changed at some point since 2.6.35 (my current kernel version is 3.6.2),
the result of a bug in the cpufreq implementation for your CPU (or its entire family),
specific to the implementation of the performance CPU governor as of 2.6.35.

Some of the things you might do to push your investigation further:

read user-guide.txt and fiddle more with the other cpufreq parameters,
repeat the tests while running a newer kernel -- the easiest way is to boot a liveCD/DVD/USB.

If you continue to experience unexpected behaviour and gain more reasons to believe it is due to a bug (definitely check with the latest minor kernel version first), go ahead and report this on the kernel.org bugzilla.
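To compare the relevant sysfs values side by side per core, a small loop like this can help (a sketch; the paths are the standard cpufreq interface, and cpuinfo_cur_freq is usually readable only by root):

for c in /sys/devices/system/cpu/cpu[0-9]*/cpufreq; do
    echo "== $c =="
    echo "affected_cpus:    $(cat $c/affected_cpus)"
    echo "related_cpus:     $(cat $c/related_cpus)"
    echo "scaling_cur_freq: $(cat $c/scaling_cur_freq)"
    cat $c/cpuinfo_cur_freq 2>/dev/null | sed 's/^/cpuinfo_cur_freq: /'
done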
I see the cores on an Intel i5 machine I'm looking at can only be run at the same clock speed: /sys/devices/system/cpu/cpu1/cpufreq/related_cpus lists all of the CPUs. Setting cpu1's clock speed changes cpu0's, as expected. Supposedly the AMD A6-4400M machine I'm running should be able to run each core at a different clock speed: /sys/devices/system/cpu/cpu1/cpufreq/related_cpus only lists cpu1. When I set cpu1's clock speed by using the performance governor and echoing 1400000 to scaling_max_freq, cpu0's clock speed remains at 2700000 as expected. Cpu1's scaling_cur_freq reads 1400000 as expected. However, cpu1's cpuinfo_cur_freq reads 2700000. From benchmarking, it appears cpu1 is indeed still running at 2.7 GHz. Am I missing something, or is something broken? I'm running Linux 2.6.35 and passing idle=mwait on the kernel command line.
Can I run multiple cores at different clock speeds?
For 3.x kernels The interface to CPUFreq has changed in the newer kernels. This would include CentOS 6. You can read about the entire interface here in the Red Hat Enterprise Linux (RHEL) documentation titled: Chapter 3. Core Infrastructure and Mechanics. Specifically the section on CPUFreq Setup. Here are the steps required to set it up. CPUFreq drivers $ ls -1 /lib/modules/`uname -r`/kernel/arch/x86/kernel/cpu/cpufreq/ acpi-cpufreq.ko mperf.ko p4-clockmod.ko pcc-cpufreq.ko powernow-k8.ko speedstep-lib.koload appropriate driver $ modprobe acpi-cpufreqinstall cpupower tool $ yum install cpupowerutilsview governors $ cpupower frequency-info --governors analyzing CPU 0: ondemand userspace performanceSo we currently only have these 3 governors loaded: ondemand, userspace, and performance. loading governors that are missing You can get a list of all governors that are available like so. $ ls -1 /lib/modules/`uname -r`/kernel/drivers/cpufreq/ cpufreq_conservative.ko cpufreq_ondemand.ko cpufreq_powersave.ko cpufreq_stats.ko freq_table.ko$ modprobe cpufreq_powersaveconfirm modules thus far: $ lsmod |grep cpuf cpufreq_powersave 1196 0 cpufreq_ondemand 10544 8 acpi_cpufreq 7763 0 freq_table 4936 2 cpufreq_ondemand,acpi_cpufreq mperf 1557 1 acpi_cpufreqconfirm which governors are loaded $ cpupower frequency-info --governors analyzing CPU 0: powersave ondemand userspace performanceviewing current policy $ cpupower frequency-info analyzing CPU 0: driver: acpi-cpufreq CPUs which run at the same hardware frequency: 0 1 2 3 4 5 6 7 CPUs which need to have their frequency coordinated by software: 0 maximum transition latency: 10.0 us. hardware limits: 1.60 GHz - 3.20 GHz available frequency steps: 3.20 GHz, 3.20 GHz, 3.07 GHz, 2.93 GHz, 2.80 GHz, 2.67 GHz, 2.53 GHz, 2.40 GHz, 2.27 GHz, 2.13 GHz, 2.00 GHz, 1.87 GHz, 1.73 GHz, 1.60 GHz available cpufreq governors: powersave, ondemand, userspace, performance current policy: frequency should be within 1.60 GHz and 3.20 GHz. The governor "ondemand" may decide which speed to use within this range. current CPU frequency is 1.60 GHz (asserted by call to hardware). boost state support: Supported: yes Active: yes 2500 MHz max turbo 4 active cores 2500 MHz max turbo 3 active cores 2500 MHz max turbo 2 active cores 2600 MHz max turbo 1 active coresIn the above output you can see my current policy is ondemand. To tune the policy and speed you use this command to do so: $ cpupower frequency-set --governor performance Setting cpu: 0 Setting cpu: 1 Setting cpu: 2 Setting cpu: 3 Setting cpu: 4 Setting cpu: 5 Setting cpu: 6 Setting cpu: 7confirm new governor $ cpupower frequency-info analyzing CPU 0: driver: acpi-cpufreq CPUs which run at the same hardware frequency: 0 1 2 3 4 5 6 7 CPUs which need to have their frequency coordinated by software: 0 maximum transition latency: 10.0 us. hardware limits: 1.60 GHz - 3.20 GHz available frequency steps: 3.20 GHz, 3.20 GHz, 3.07 GHz, 2.93 GHz, 2.80 GHz, 2.67 GHz, 2.53 GHz, 2.40 GHz, 2.27 GHz, 2.13 GHz, 2.00 GHz, 1.87 GHz, 1.73 GHz, 1.60 GHz available cpufreq governors: powersave, ondemand, userspace, performance current policy: frequency should be within 1.60 GHz and 3.20 GHz. The governor "performance" may decide which speed to use within this range. current CPU frequency is 3.20 GHz (asserted by call to hardware). 
  boost state support:
    Supported: yes
    Active: yes
    2500 MHz max turbo 4 active cores
    2500 MHz max turbo 3 active cores
    2500 MHz max turbo 2 active cores
    2600 MHz max turbo 1 active cores

You can also tune the min/max CPU frequencies within a policy using cpupower frequency-set --min <freq> --max <freq>. See this page for more details on what you can do with cpupower frequency-set.

Doing the above without cpupowerutils

Finally, if you don't have the cpupowerutils package installed, you can interact with CPUFreq much as you did with the earlier 2.6 kernels: you echo values into the sysfs filesystem. For example:

$ echo 360000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq

For 2.6 kernels

You can read about the various cpufreq features over on this site. Excerpt from CPU frequency scaling in Linux with cpufreq:

ignore_nice_load - This parameter takes a value of '0' or '1'. When set to '0' (its default), all processes are counted towards the 'cpu utilization' value. When set to '1', processes run with a 'nice' value will not count (and thus be ignored) in the overall usage calculation. This is useful if you are running a CPU-intensive calculation on your laptop that you do not care how long it takes to complete, as you can 'nice' it and prevent it from taking part in deciding whether to increase your CPU frequency.

To turn this on, do the following:

sudo sh -c "echo 1 > /sys/devices/system/cpu/cpu0/cpufreq/ondemand/ignore_nice_load"

I'd put a 0 in this file, since that is the default. If you have any long-running niced processes, which I highly doubt, you can set it to 1.
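If you take the sysfs route for governors as well, note that the change has to be written to every core's scaling_governor node, not just cpu0's. A minimal sketch, assuming the governor module is already loaded:

# apply the ondemand governor to every core without cpupowerutils
for g in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
    echo ondemand | sudo tee "$g" > /dev/null
done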
I changed my CentOS 6 CPU governor from ondemand (the default one) to conservative and got this after restarting the cpufreq service:

/etc/rc5.d/S13cpuspeed: line 88: /sys/devices/system/cpu/cpufreq/conservative/ignore_nice_load: File or directory does not exist

So what should I do? Should I create the file, and if so, what should I put there?
CentOS conservative governor, nice error
In both cases, your CPU can run slightly faster than its specified frequency, typically when one of its cores is running a CPU-intensive process, and the others aren’t. On your Core 2 Mobile system, this was provided by Intel Dynamic Acceleration; on the Core i7, by Turbo Boost. The exact details vary from one processor to another. Earlier CPUs could only boost one core, but nowadays multiple cores can be boosted. The CPU ensures it stays within a certain power and thermal envelope.
How can it be that my current CPU frequency (CPU MHz) for my Intel Core2 Duo T9400M is above the max MHz while under high load?

➜ lscpu
[...]
Model name:  Intel(R) Core(TM)2 Duo CPU T9400M @ 2.53GHz
Stepping:    10
CPU MHz:     2606.581
CPU max MHz: 2534.0000
CPU min MHz: 800.0000
[...]

This isn't limited to lscpu: I get similar values out of /proc/cpuinfo:

➜ bat /proc/cpuinfo
[...]
model name : Intel(R) Core(TM)2 Duo CPU T9400 @ 2.53GHz
cpu MHz    : 2635.237
[...]

I found this out while looking at the Node.js documentation and noticing that the current speed value of os.cpus() - even in the example in the documentation - is above the maximum CPU speed for the model:

[ { model: 'Intel(R) Core(TM) i7 CPU 860 @ 2.80GHz',
    speed: 2926,
[...]
How can my CPU frequency be above the maximum MHz value in lscpu?
You probably have a real-time application that is consuming all the CPU (some bad implementation), and because of its real-time scheduling priority the system doesn't have enough resources available for other tasks. I suggest that you remove the real-time priority from your applications and check which one is consuming a lot of CPU, and, after correcting the problem, put it back at real-time priority.
My board keeps displaying the message below, and the terminal does not accept any input. What do the fields in the message mean (t, g, c, q, ...)? What is the cause of this phenomenon, and how can I fix it?

INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 0, t=3936547 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 2, t=3972552 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 1, t=4008557 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 1, t=4044562 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 2, t=4080567 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 0, t=4116572 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 1, t=4152577 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 0, t=4188582 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 1, t=4224587 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 1, t=4260592 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 1, t=4296597 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 2, t=4332602 jiffies, g=367023708, c=367023707, q=1511)
INFO: rcu_preempt detected stalls on CPUs/tasks: { 3} (detected by 2, t=4368607 jiffies, g=367023708, c=367023707, q=1511)
"rcu_preempt detected stalls on CPUs / tasks" message appears to continue
You can use the ondemand cpu-freq governor, as long as you set the ignore_nice_load parameter to 1. From Documentation/cpu-freq/governors.txt, ondemand section:

ignore_nice_load: this parameter takes a value of '0' or '1'. When set to '0' (its default), all processes are counted towards the 'cpu utilisation' value. When set to '1', the processes that are run with a 'nice' value will not count (and thus be ignored) in the overall usage calculation. This is useful if you are running a CPU intensive calculation on your laptop that you do not care how long it takes to complete as you can 'nice' it and prevent it from taking part in the deciding process of whether to increase your CPU frequency.
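As a sketch of how the two pieces fit together (the path of the ondemand tunable varies between kernel versions - on newer kernels it lives under /sys/devices/system/cpu/cpufreq/ondemand/ - and long_computation is just a placeholder for your job):

# tell ondemand to ignore niced processes when computing load
echo 1 | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/ondemand/ignore_nice_load

# then run the heavy job at a low priority; it will no longer drive the frequency up
nice -n 19 ./long_computation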
In some devices the cpu speed is dynamic, being faster when there is more load. I was wondering if it is possible to set nice level or priority of a process so that it does not influence an increase in cpu speed when it is running flat out. i.e. Process is running flat out, but only using spare cpu cycles as low priority. But also not causing an increase in cpu speed. When cpu is off process stops. When cpu is slow process may have some cpu, maybe most of it. When cpu is fast, because another process is running at 90%, process gets the remaining 10% of fast cpu. Then other process stops, so low priority process gets 100% of cpu, but the frequency controller does not see this low priority process and drops the frequency.
Process priority and cpu speed
That's the current CPU frequency; it can be scaled up and down. Have a look in /sys/devices/system/cpu/cpu0 (or 1, 2, 3), then the cpufreq directory. Check cat scaling_governor. It is probably ondemand (I believe that's the default kernel configuration). Now check scaling_available_frequencies; you'll see a list that for you should start with 2600000. The kernel will boost the frequency when required. Try a busy loop, with bash: while (( 1 )); do echo busy; doneLet that go and check your frequencies. They should go up. If you have a CPU monitor and one of them hits close to 100%, that core will probably be at the max frequency now.
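To watch the effect while the busy loop runs, you can poll the sysfs values directly (same directory as above; values are in kHz):

watch -n1 "cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq"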
I use CentOS 6.4 (64-bit). I have an old processor - an AMD Phenom II X4 810 (HDX810W) at 2.6 GHz. However, when I execute the command cat /proc/cpuinfo I get the following:

processor : 0
vendor_id : AuthenticAMD
cpu family : 16
model : 4
model name : AMD Phenom(tm) II X4 810 Processor
stepping : 2
cpu MHz : 800.000
cache size : 512 KB
physical id : 0
...

All four cores have the same speed of 800 MHz. How can this be explained?
CPU speed and cat /proc/cpuinfo
This is normal behavior. It is part of the system attempting to conserve power by constantly adjusting the CPU speed. Take a look at my answer on this other U&L Q&A titled: How does CPU frequency work in conky?.

[screenshot: Conky CPU info]

What's going on?

This feature is called a governor, and the system has various power profiles that it can follow. You're likely using the "powersave" one, which will attempt to drive your CPU's speed down to a lower value when there isn't any load on your system. It may seem annoying but it's actually a good thing: it saves a fair amount of power when running Linux on a laptop. Even on desktops that are mostly idle, it can save a fair amount of power over the life of your system, especially if you tend to leave it on most of the time.

excerpt

CPU frequency scaling is implemented in the Linux kernel, the infrastructure is called cpufreq. Since kernel 3.4 the necessary modules are loaded automatically and the recommended ondemand governor is enabled by default. However, userspace tools like cpupower, acpid, Laptop Mode Tools, or GUI tools provided for your desktop environment, may still be used for advanced configuration.

Source: ArchLinux Wiki - CPU Frequency Scaling

What governors are available?

Governor      Description
--------      -----------
ondemand      Scales the frequency dynamically; jumps to the maximum at 95% CPU load
performance   Run the cpu at max frequency
conservative  Scales the frequency dynamically, stepping up gradually at 75% CPU load
powersave     Run the cpu at the minimum frequency
userspace     Run the cpu at user specified frequencies

References

- CPU frequency scaling in Linux with cpufreq
- CPU Frequency Scaling
I just installed conky and noticed that my CPU frequency changes every second, in steps of 10 to 100MHz, between 1000MHz and 2000MHz. I just wondered if this is normal behavior?
CPU Frequency changing every second
That won't work. The number of clock cycles each instruction takes to execute (they take quite a few, not just one) depends heavily on the exact mix of instructions that surround it, and varies by exact CPU model. You also have interrupts coming in, and the kernel and other tasks having their instructions executed mixed in with yours. On top of that, the frequency changes dynamically in response to load and temperature. Modern CPUs have model-specific registers that count the exact number of clock cycles. You can read this register, and, using a high-resolution timer, read it again a fixed period later and compare the two to find out what the (average) frequency was over that period.
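For instance, with the perf tool (usually shipped in your distribution's linux-tools package) you can sample the cycle counter over a fixed period and derive the average frequency from it; note that with -a the count is aggregated over all CPUs, so divide by the number of CPUs as well as the elapsed time:

# count cycles system-wide for one second; the average frequency per CPU is
# roughly cycles / (number of CPUs * elapsed seconds)
sudo perf stat -a -e cycles sleep 1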
My PC has two processors, and I know that each one runs at 1.86 GHz. I want to measure the clock pulse of my PC's processor(s) manually, and my idea is to compute the quotient of the number of assembler lines a program has and the time my computer takes to execute it, so that I have the number of assembly instructions per unit time processed by the CPU (this is what I understood a 'clock cycle' to be). I thought to do it in the following way: I write a C program and convert it into assembly code. I do: $ gcc -S my_program.c, which tells the gcc compiler to do the whole compiling process except the last step of transforming my_program.c into a binary object. Thus, I have a file named my_program.s that contains the source of my C program translated into assembler code. I count the lines the program has (let's call this number N). I did: $ nl -l my_program.s | tail -n 1 and I obtained the following: 1000015 .section .note.GNU-stack,"",@progbits That is to say, the program has a million lines of code. I do: $ gcc my_program.c so that I can execute it. I do: $ time ./a.out ("a.out" is the name of the binary object of my_program.c) to obtain the time (let's call it T) spent running the program, and I obtain: real 0m0.059s user 0m0.000s sys 0m0.004s It is supposed that the time T I'm searching for is the first one in the list, "real", because the other ones refer to other resources that are running in my system at the same moment I execute ./a.out. So I have N=1000015 lines and T=0.059 seconds. If I compute the division N/T I obtain that the frequency is near 17 MHz, which is obviously not correct. Then I thought that maybe the fact that there are other programs running on my computer and consuming hardware resources (without going any further, the operating system itself) makes the processor "split" its "processing power", and that this makes the clock pulse appear slower, but I'm not sure. But I thought that if this is right, I should also find the percentage of CPU resources (or memory) my program consumes, because then I could really aspire to obtain a well-approximated result for my real CPU speed. And this leads me to the issue of how to find out that 'resource consumption value' of my program. I thought about the $ top command, but it's immediately discarded due to the short time my program takes to execute (0.059 seconds); it's not possible to distinguish by eye any peak in memory usage during such a short time. So what do you think about this? What do you recommend I do? I know there are programs that do this work I'm trying to do, but I prefer to do it with raw bash because I'm interested in doing it in the most "universal" way possible (it seems more reliable).
How to measure the clock pulse of my computer manually?
Turns out I had to install thermald from the Debian repository. Once I did, everything started to work just fine. No more throttling down to 400MHz. Interestingly, I never had to install this package myself in the past and all my past laptops never had this issue. I guess it used to be installed automatically as a dependency of some other package but I never noticed it. Thanks!
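For reference, on Debian and its derivatives the installation boils down to something like the following (a sketch; package and service names as shipped by Debian):

sudo apt install thermald             # the Intel thermal daemon
sudo systemctl enable --now thermald  # start it now and on every boot
systemctl status thermald             # confirm it is running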
I just bought a new laptop (Asus UX435EG-AI084R) equipped with an Intel Tiger Lake CPU (i7-1165G7; 2.80GHz) and I installed Debian Linux (Testing) on it. Unfortunately I am experiencing a very strange issue related to CPU frequency scaling. Every time the CPU is under heavy load, the frequency goes down to 400MHz. The moment the CPU stops being under load and the system is (almost) idle, the frequency goes up to ~1.2GHz. The CPU never runs hot. The system, for some reason, instead of increasing the frequency when needed, decreases it! Interestingly, this behaviour is not present under Windows 10 Pro (dual boot). I have tried the following without managing to change the above strange behaviour:

- I disabled intel_pstate (intel_pstate=disable kernel parameter)
- Forced the 'performance' policy instead of 'powersave'
- Set the minimum frequency to 3GHz. Yes, even that doesn't work! The CPU will still go down to 400MHz at full load and will return to 3GHz on idle.
- I tried to uninstall tlp.
- I double-checked the temperature. It never goes above 60C under load.
- I tried every single possible solution I found online and nothing works.
- I disabled Turbo Boost.
- There are no CPU-related options in the BIOS to fiddle with.
- Disconnecting the laptop from the charger didn't make any difference.

Since the problem only appears on Linux, I started to suspect that this is a kernel/firmware/driver bug. So what I did was live boot Intel Clear Linux and do the following:

- Run four simple "while True: pass" Python scripts in order to occupy all 4 cores; lscpu returned ~400MHz.
- Kill one of the running scripts to occupy 3 cores; lscpu returned ~800MHz.
- Kill one of the running scripts to occupy 2 cores; lscpu returned ~1200MHz.
- Kill one of the running scripts to occupy 1 core; lscpu returned ~1800MHz.

Currently, Debian Linux (Testing) uses kernel 5.9.0-5, while Intel Clear Linux uses 5.9.12. My other laptop, with a 3-year-old Intel CPU and the same Debian Linux version, does not have this problem.

Is it reasonable to assume that we are looking at a software/kernel/driver bug? While waiting for kernel 5.10 to land on Debian Testing, is there a way to completely disable any kind of frequency scaling?

Thank you

EDIT 1: Turns out the 5.10 kernel did not fix anything. The same problem exists on Ubuntu 20.10. I guess I need to figure out what Intel Clear Linux does differently and try to configure Debian accordingly. Any advice is more than welcome.
How to completely disable CPU frequency scaling? Problems with kernel 5.9 running on Intel Tiger Lake CPU laptop
This is related to a new driver (intel_pstate) introduced in Fedora 20 that provides only those two governors. See this thread, CPU Governors - where is ONDEMAND?, for details. To get the missing governors, you should boot with the kernel parameter intel_pstate=disable. To do so, in the GRUB boot screen, choose "edit boot commandline" and add this to the line which starts with kernel. You can also add it permanently to the grub config file. Note that normally you should not need other governors than those offered by the new driver, which does its job perfectly.
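To make it permanent, the usual approach on Fedora is to add the parameter to the grub defaults and regenerate the config. A sketch (the output path can differ, e.g. on EFI systems):

# in /etc/default/grub, append intel_pstate=disable to the existing
# GRUB_CMDLINE_LINUX options, then regenerate the config:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg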
In Ubuntu 13.10 on my (Dual Core i5 Lenovo G570) laptop, I recently discovered the wonders of indicator-cpufreq, so I can extend my battery life dramatically by setting it to the 'ondemand' or 'powersave' governor - here is the menu it shows:

I was wondering whether I could implement this in the other half of my dual boot on this laptop, Fedora 20. However, after looking at this documentation and installing the kernel-tools package, I ran the command to list the available modes. On Fedora I get:

wilf@whm1:~$ cpupower frequency-info --governors
analyzing CPU 0:
  powersave performance

On Ubuntu I get:

wilf@whm2:~$ cpupower frequency-info --governors
analyzing CPU 0:
  conservative ondemand userspace powersave performance

So can I get the conservative, ondemand, and userspace modes in Fedora? Mainly the ondemand one.

Fedora System Info

Kernel:

Linux whm1 3.12.10-300.fc20.i686+PAE #1 SMP Thu Feb 6 22:31:13 UTC 2014 i686 i686 i386 GNU/Linux

Version:

Fedora release 20 (Heisenbug)
Kernel 3.12.10-300.fc20.i686+PAE on an i686

/proc/cpuinfo, relevant /etc/default/grub (Fedora manages Grub, not Ubuntu):

#GRUB_CMDLINE_LINUX="acpi_osi=Linux acpi_backlight=vendor pcie_aspm=force"
GRUB_CMDLINE_LINUX="vconsole.font=latarcyrheb-sun16 $([ -x /usr/sbin/rhcrashkernel-param ] && /usr/sbin/rhcrashkernel-param || :) rhgb quiet acpi_osi=Linux acpi_backlight=vendor pcie_aspm=force"

Ubuntu System Info

Kernel:

Linux whm2 3.11.0-15-generic #25-Ubuntu SMP Thu Jan 30 17:25:07 UTC 2014 i686 i686 i686 GNU/Linux

/proc/cpuinfo, relevant /etc/default/grub (I think this is loaded by Fedora's Grub):

GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""
How to get ondemand governor on fedora
This can be done with the cpufreq-set command from cpufrequtils. Note that cpufreq-set interprets a bare number as kHz, so include a unit. Here is an example that sets 1.7 GHz: cpufreq-set -f 1700MHz
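Setting a fixed frequency also requires the userspace governor to be active. A minimal sketch (the CPU number and frequency are only examples):

sudo cpufreq-set -c 0 -g userspace   # the userspace governor accepts fixed speeds
sudo cpufreq-set -c 0 -f 1700MHz     # pin CPU 0 to 1.7 GHz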
What command or program do I need to run in order to change the clock speed to a specific value I want? For example, I'm looking for something like <command name> 1.7, or something similar, to change the speed to a specified frequency in GHz or MHz. I would prefer a terminal application, but a desktop app would also suffice. I am using Ubuntu 12.04 LTS.
Ubuntu- How do I change clock speed from terminal? [duplicate]
To see all your CPU specs, use the command:

lscpu

To get the specific frequency of your CPU, filter the output with grep like so:

lscpu | grep MHz

It will give you output like:

CPU MHz: 2723.789

To watch the CPU speed fluctuate in real time, use (the original suggestion piped through awk '{print $1}', but inside double quotes the $1 is expanded by the outer shell, and the first field is just "CPU" anyway, so grep alone is simpler and correct):

watch -n1 "lscpu | grep MHz"

Source: https://askubuntu.com/questions/916382/ubuntu-get-actual-current-cpu-clock-speed
I have a Linux PC with a 3.4 GHz CPU. I must check this processor to see if it actually runs at that speed. Is there a benchmark available? I ran sysbench but it only provides time of completion, and I must find the maximum (actual) clock rate.
Linux utility to benchmark clock speed of CPU
Use the command cpupower --cpu all frequency-info | grep "current CPU" to see what frequency the cores are running at. Use the command cpupower --cpu all frequency-set --max 1.4GHz to cap the CPU frequency at 1.4GHz.
I have a first generation i7 computer and it is prone to overheating. How can I set the CPU clock frequency in Fedora?
How to manually set CPU clock frequency in Fedora?
You can't set the affinity for all invocations of an executable. The affinity is managed by the kernel and inherited from parent process to child process; there's no mechanism that changes the affinity of a process when an executable is executed. If you want all invocations of gzip to run on CPU 1, put a wrapper script called gzip ahead of the real one in the PATH, e.g. ~/bin/gzip:

#!/bin/sh
exec taskset -c 1 /bin/gzip "$@"

(Note that taskset -c takes a CPU number; a bare 1 would be interpreted as a CPU mask, which selects CPU 0.) But this strikes me as completely useless. Explicitly setting a process's affinity usually makes things slower. It can sometimes be useful to confine a CPU-intensive task to certain processors and leave the system more reactive, though nice usually does a better job overall. But doing that indiscriminately for all the invocations of an executable sounds like an XY problem.
How can I set CPU affinity for a specific program (say gzip) so that it always runs on a specific core or cores (core 1, for example)? I read about taskset, but can it be used before the program is actually run and creates a process?
Set CPU affinity for the specific program?
The cpupower.service unit provided with Archlinux reads its settings from /etc/default/cpupower. Either uncomment the governor setting, or add a new line so that governor='powersave'.
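Putting it together, a minimal sketch of the relevant bits (file contents plus enabling the service so the setting is applied at boot):

# /etc/default/cpupower (excerpt)
governor='powersave'

# then make sure the service actually runs at boot:
sudo systemctl enable --now cpupower.service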
I am trying to change the governor from "ondemand" to "powersave" as the wiki says (https://wiki.archlinux.org/index.php/CPU_Frequency_Scaling#Scaling_governors), but whichever way I try to change it, Arch always boots with "ondemand".

sudo cpupower frequency-set -g powersave

By the way, this command works.
Powersave governor as default on Arch
Alright, this was said to be a bug with ThinkPads: when your battery is unplugged and the machine is connected to AC power higher than 65W, the frequency will be stuck at the lowest value. Check /sys/devices/system/cpu/cpuX/cpufreq/bios_limit to see if it's capped.

Source: http://www.thinkwiki.org/wiki/Problem_with_CPU_frequency_scaling

I got it solved by passing the kernel parameter processor.ignore_ppc=1.
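A quick way to check for the cap on all cores, and a sketch of making the workaround permanent via grub (paths as described above):

# compare the BIOS-imposed limit against the hardware maximum
sudo grep . /sys/devices/system/cpu/cpu*/cpufreq/bios_limit
sudo grep . /sys/devices/system/cpu/cpu*/cpufreq/cpuinfo_max_freq

# to persist the workaround, append processor.ignore_ppc=1 to
# GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then run:
sudo update-grub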
acpi-cpufreq was already loaded, but the frequency is now 800MHz on all cores, and I can't bring it back to full speed, which is 2500MHz. I tried cpufreq-set -g performance, then checked /proc/cpuinfo; it was still 800MHz. Is there anything wrong?

UPDATE

It's an i5 CPU; part of the CPU information is provided below:

processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i5-2450M CPU @ 2.50GHz
stepping : 7
microcode : 0x25
cpu MHz : 800.000
cache size : 3072 KB
physical id : 0
siblings : 4
core id : 1
cpu cores : 2
apicid : 3
initial apicid : 3
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
bogomips : 4986.76
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
cpufreq-set -g performance doesn't work on arch linux
Add this command to rc.local or create a systemd unit - whatever you like. Instead of disabling Turbo you might want to limit the maximum operating frequency of your CPU. There's a gulf between base and turbo frequencies, so disabling Turbo feels like an overkill. I have a script for that as well. With the intel-pstate driver you're free to set any maximum CPU operating frequency.
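A sketch of the systemd route (the unit name disable-turbo.service is arbitrary; the sysfs path is the one from the question and only exists with the intel_pstate driver):

# /etc/systemd/system/disable-turbo.service
[Unit]
Description=Disable Intel Turbo Boost via intel_pstate

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo'

[Install]
WantedBy=multi-user.target

Enable it with sudo systemctl daemon-reload && sudo systemctl enable --now disable-turbo.service.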
Through the magic of piezoelectric phenomena, I experience "coil whine" when moving the mouse. Turns out said coil is energized by the CPU, and that the Intel driver enabling Turbo Boost makes it process my mouse movements extremely quickly, resulting in audible power consumption spikes. When I disable it with the following command, I get back my sanity: echo "1" | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo But unfortunately, it only lasts until the next reboot. Is there a way to persistently disable Turbo Boost? Perhaps via some incantation involving x86_energy_perf_policy or cpuinfo? In case it's relevant, my particular CPU model is i9-10900.
Persistently disable Intel Turbo Boost
It's quite likely that if you are using the second generation of EPYC CPU (Rome), not all functions are implemented in your kernel. I don't know which distro you are using (support could be backported), but according to this blogpost on Ubuntu, https://ubuntu.com/blog/amd-epyc-rome-support-in-ubuntu-server, your kernel might not fully support your CPU. From the Ubuntu site: Support for AMD EPYC Rome has been merged to the Linux kernel starting with the 5.4 series. Therefore, all Ubuntu releases with the 5.4 kernel installed support this CPU and all its new features. However, Canonical has also backported basic support for AMD EPYC Rome to older LTS releases to ensure they will work properly on this new CPU.
I have an HPE ProLiant server system with an AMD Epyc CPU, BIOS A43 v1.20, with Linux kernel 4.19.71 (I also tried 5.4.0). Now, I'm trying to set the CPU performance governor:

# cpupower frequency-set -g performance
Setting cpu: 0
Error setting new values. Common errors:
- Do you have proper administration rights? (super-user?)
- Is the governor you requested available and modprobed?
- Trying to set an invalid policy?
- Trying to set a specific frequency, but userspace governor is not available,
  for example because of hardware which cannot be set to a specific frequency
  or because the userspace governor isn't loaded?
#

So I begin troubleshooting:

# cpupower frequency-info
analyzing CPU 0:
  no or unknown cpufreq driver is active on this CPU
  CPUs which run at the same hardware frequency: Not Available
  CPUs which need to have their frequency coordinated by software: Not Available
  maximum transition latency: Cannot determine or is not supported.
  Not Available
  available cpufreq governors: Not Available
  Unable to determine current policy
  current CPU frequency: Unable to call hardware
  current CPU frequency: Unable to call to kernel
  boost state support:
    Supported: yes
    Active: yes
    Boost States: 0
    Total States: 3
    Pstate-P0: 2000MHz
    Pstate-P1: 1800MHz
    Pstate-P2: 1500MHz
#
# ls /sys/devices/system/cpu/cpufreq/
<Empty>
#

So, for whatever reason, it thinks that the cpufreq drivers are missing. However, the kernel .config has the following enabled:

CONFIG_CPU_FREQ=y
CONFIG_X86_ACPI_CPUFREQ=y
CONFIG_X86_ACPI_CPUFREQ_CPB=y
CONFIG_X86_INTEL_PSTATE=y
CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
CONFIG_CPU_FREQ_GOV_ONDEMAND=y

I did try to disable CONFIG_X86_INTEL_PSTATE and add CONFIG_X86_POWERNOW_K8 (the only AMD-specific option in the current kernel), but it didn't seem to help. I would be happy for any advice.
linux: can't set cpu frequency governor with cpupower
There are still a couple of options left; please refer to the Arch wiki page. The one you are looking for, specifically, is thermald.
I run Gentoo Linux on my laptop. I have an issue, though, where if I'm building some very large piece of software (as I do fairly frequently, since the purpose of this laptop is development), the CPU tends to heat up more than I'd like. I used to use cpufreqd to manage this, since it has an lm_sensors plugin and can reduce the CPU frequency once it reaches a particular temperature threshold. However, this is no longer going to be a good option, since (apparently) cpufreqd is no longer actively maintained, and as such is going to be removed from Gentoo's package tree. Because of this, my question is: is there some other way I can solve this problem? I am aware of other similar CPU frequency management daemons, as well as the drivers that are built into the Linux kernel, but as far as I know they do not manage CPU frequency as a function of CPU temperature.
Slow down CPU when it heats up
That's what cpulimit is for:

cpulimit --exe=gzip --background --limit=100
cpulimit --exe=tar --background --limit=100

This will limit the total CPU usage of the most CPU-intensive programs used by the backup2l script to 100% each, i.e. the equivalent of one core apiece. If that still makes too much noise, reduce the number until your machine is quiet again. After backup2l is finished, just killall cpulimit to go back to normal operation.

Note: your backup might take about twice as long if you limit it to the equivalent of only two cores - just like a car: the faster, the noisier...
When the CPU (Intel i5-8400) is heavily loaded, the fan seems to increase its speed and make noise. I want to eliminate the noise when running CPU-intensive backup process (backup2l program). (It is apparently CPU-intensive because of compressing backup with gzip.) How to make a process not to use turbo boost? My OS is Ubuntu Linux 18.10. If such a feature is not available in Linux, we should report a feature suggestion.
Turn off CPU turbo-boost for a process
From the conky man page:

cpu (cpuN)
    CPU usage in percents. For SMP machines, the CPU number can be provided as an argument. ${cpu cpu0} is the total usage, and ${cpu cpuX} (X >= 1) are individual CPUs.

freq_g (n)
    Returns CPU #n's frequency in GHz. CPUs are counted from 1. If omitted, the parameter defaults to 1.

You most likely have something like SpeedStep enabled, which acts like a governor on a car, regulating the speed of the cores inside your CPU. You can confirm that this is going on by looking at the output of this command:

% less /proc/cpuinfo
processor  : 0
vendor_id  : GenuineIntel
cpu family : 6
model      : 37
model name : Intel(R) Core(TM) i5 CPU M 560 @ 2.67GHz
stepping   : 5
cpu MHz    : 1199.000
...

The two numbers that matter are 2.67GHz, the speed my CPU is rated to operate at, and 1199.000, the speed my CPU is currently allowed to run at by the governor set up on my Linux laptop. You can see which governor is currently configured like so:

# available governors
% sudo cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
powersave ondemand userspace performance

# which one am I using?
% sudo cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
powersave

# what's my current frequency scaling?
% sudo cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
1199000

# what maximum is available?
% sudo cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
2667000

# what's the minimum?
% sudo cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
1199000

# what scaling frequencies can my CPU support?
% sudo cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
2667000 2666000 2533000 2399000 2266000 2133000 1999000 1866000 1733000 1599000 1466000 1333000 1199000

You can override your governor by doing the following, using one of the governors listed above:

% sudo sh -c "echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"

References

- Using CPU Frequency on Linux
I'm setting up conky and I would like to add the CPU frequency, but if I put

${freq_g cpu0} Ghz

I get 1.2Ghz. Why is that? My CPU is 2.8Ghz.
How does CPU frequency work in conky?
Basics

With an AMD APU, the following factors are of interest.

- If your system will only use console output, you will want to use the radeon driver for it. An active console handled by the radeon driver will save 8 W compared to both the fallback driver and fglrx on an A10-6700 (Richland). If the console is idle, radeon will save an additional 2 W.
- The AMD APU's Turbo Core feature must be enabled one way or the other. Contrary to what some people believe, Turbo Core (which, once enabled, is an autonomous APU feature) works excellently at the APU level and has reasonable use cases. But either the radeon or the fglrx driver must be present, and in both cases care must be taken.

If you feel that it's worth reducing the idle power consumption of a complete A10-6700 Richland system (including fans, DDR3-1866 RAM and a single SSD) from 40 W to 30 W, or if you want to have full Turbo Core speed for longer (the available Turbo Core frequency is based on total chip power consumption including graphics, as well as temperature), then you should care. All the above statements relate to my former analysis: How to set up Linux for full AMD APU power management support: Turbo Core, Cool'n'Quiet, Dynamic Power Management?

Be aware that you cannot necessarily rely on information from /proc and /sys regarding the actual frequencies of your AMD APU's cores. To be on the safe side, use cpufreq-aperf (or either cpupower frequency-info or cpupower monitor) after modprobe msr. Sadly, up to and including Debian Jessie 8.2, you will not get everything out of an AMD APU with a Debian installation out of the box.

Debian Option 1: The fglrx Driver

AMD's Linux fglrx driver enables Turbo Core, which should not come as a surprise. However, while the APUs show a very reasonable and useful Turbo Core behaviour with the free Linux radeon driver, this is not necessarily the case when you use fglrx. The fglrx driver appears to manipulate the core affinity of processes. Maybe it even replaces microcode in the APU. But regardless of how fglrx works at the detail level, you'll get this:

- On an APU with four cores, if you have two processes requiring full performance, Linux will start them on the two separate APU modules. (E.g. the A10-6700 has two modules with two cores each, and while each core has a small individual L3 cache, both cores on the same module share a single L2 cache.) This initial setup will give you maximum performance and also high power consumption. (The A10-6700's power consumption will increase by 70 W in this example.)
- But fglrx will relocate one of the processes to the second core of the other module if the APU gets too hot. (This will reduce A10-6700 power consumption by 25 W as compared to using both APU modules.) It presumably does this to demonstrate a high core frequency for a longer time. However, the migrated process is very likely to effectively stop for a moment due to the L2 cache not having any of the required data at hand, and usually, performance after the migration will be lower due to the shared L2 cache.
- Since fglrx does not take any proactive measures to slightly reduce core frequencies before the APU's limits are reached, it consequently reduces core speeds drastically if the processes run for a longer time. This means that after the burst, you'll have to live with lower frequencies until your APU has cooled down.

Other than for core frequency demonstration, the behaviour of fglrx is somewhat questionable in my view; I believe you'll get better overall performance with radeon.
But if you want a 2D/3D system and find this abrupt frequency scaling behaviour acceptable, you can choose fglrx up to and including Debian 7. You can additionally choose fglrx for Debian 8 if you don't intend to use the GNOME desktop.

Debian Option 2: The radeon Driver

As mentioned, the radeon driver offers a much lower console-mode power consumption and a much smoother Turbo Core experience. The price you'll have to pay is its worse 3D support.

- Debian 6 (Squeeze): Even with Linux 3.2 available as a backport, the radeon driver will not handle your APU's Turbo Core feature.
- Debian 7 (Wheezy): Linux 3.16 is available as a backport. Upgrade to Linux 3.16 (if no other requirements prevent it) and see below.
- Debian 8 (Jessie): Based on Linux 3.16. See below.

The flag responsible for Turbo Core handling is called bapm; it is located in the trinity_dpm.c file of the radeon driver. Before Linux 3.16, it was always disabled due to stability issues with some configurations. As of Linux 3.16, two changes were planned:

- The value for bapm can be provided as a module parameter (see here).
- The value of bapm is set to 1 by default for Kaveri, Kabini and desktop Trinity/Richland systems (see here), resulting in Turbo Core being enabled.

This means that with current Linux kernels, you'll most likely get best value (Turbo Core, an energy-efficient console) out of the box (this is e.g. true for current ArchLinux installers). Much to my surprise, even with the Debian Jessie 8.2 installer, you need to take care of two things:

- Following its policies, Debian will not provide the required microcode by default. You'll want to provide it: append non-free and contrib to the relevant entries in /etc/apt/sources.list and run sudo apt-get update as well as sudo apt-get install firmware-linux-nonfree.
- Interestingly, despite the 3.16 defaults, Turbo Core does not work on the same system where a recent ArchLinux succeeded. It could be that the Debian 3.16 kernel is too old to incorporate the defaults, or that the Debian patch to isolate the microcode has some side effect. Either way, Turbo Core can be enabled for Debian Jessie in the presence of the microcode by providing a bapm value at boot time: append radeon.bapm=1 to the value of GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and run sudo update-grub.

After these two changes, Turbo Core and the low console power consumption worked for me like a charm.
I plan to set up an AMD APU system using Debian, but I cannot find hints how to achieve maximum energy efficiency and successful use of Turbo Core. In particular, the Debian Wiki HowTo on CPU Frequency Scaling is not very helpful for AMD systems. How do I have to set up my Debian system to deploy my AMD APU's features?
How to set up a Debian system (focus on 2D or console/server) with an AMD Turbo Core APU for maximum energy and computing efficiency?
It’s the slowest speed at which the processor can run, e.g. if the CPU doesn’t have much to do (depending on the governor in use) but can’t go to sleep, or if it’s throttled (from overheating, typically). You’ll see the current speed reported by lscpu vary between the minimum and maximum values.
There are CPU MHz, CPU max MHz and CPU min MHz in the lscpu output. What do they mean, especially CPU min MHz? We can assume CPU max MHz is the maximum CPU frequency and CPU MHz is the current frequency. Why is there a minimum? From the lscpu man page: "Minimum megahertz value for the CPU." This explanation is not clear to me. What does CPU min MHz actually mean?
What is CPU min MHz in lscpu output?
The issue was the intel_pstate driver. I switched to the original ACPI driver via boot kernel parameters. Specifically, in /etc/default/grub, I changed the DEFAULT boot line to:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_pstate=disable acpi=force"

(remember to run update-grub afterwards). Now, even with no changes at all (i.e. the default "ondemand"):

MULTITHREAD (8 threads)
BAT + ondemand: 38.5 (37.5 ~ 40.0)
BAT + performance: 31.8 (30.1 ~ 35.0) *1

1* I see some very small spikes to 35 once every few seconds, but it is within reason...

Ironically, power consumption during a normal workload (browsing, EMACS, on wifi etc.) actually also got BETTER using the ACPI driver than intel_pstate (average 590 mA vs 660 mA). A happy (but worrisome) side effect.

EDIT: one downside is that suspend (sleep mode) seems to consume more power when not using the intel_pstate driver. About 10% every 12 hours...
EDIT: Ubuntu (mate) 20.04, intel_pstate driver. The computer is a razer blade stealth ultrabook (early 2019) with an intel core i7-8565U.

I am encountering odd behavior (extreme slowdown) while on battery power only, even when I have set TLP to AC mode. The problem is made much worse if I set cpufrequtils to performance mode (especially if I multithread)!

We'll start with the single-threaded case (i.e. only the main thread). I am running a cascade of OPENCV filters (Gaussian blurs etc.) on video frames from file or from a webcam. It does not matter if I load all the frames into memory first (i.e. it's not a disk or device I/O problem). Below are listed the processing times for a single loop (one frame). This is not complex code. Basically, it is doing:

Filter filters[400]
while( cap.read(frame) ) {
    for( int i=0; i<400; ++i ) {
        filters[i].dofilter(frame);
    }
}

where filters[i].dofilter is just a call to e.g. cv::GaussianBlur, resize(), etc., with the destination cv::Mat pre-allocated (I am not doing any additional allocations). This is using the CPU only (i.e. it's not using OPENCV's transparent OpenCL or anything).

SINGLE THREAD
AC + powersave: 71 msec (variance 70.5-71.5)
AC + performance: 67 msec (variance 66.5-67.5)
BAT + powersave: 95 msec (variance 84.0-115.0) *1
BAT + performance: 104 msec (variance 76.0-202.0) *2

1* Note: spikes to 110+ about every 5 sec
2* Note: most ~96, with a few spikes down to the 80s and up to the 120s

Method: 10 runs of each condition for 60 seconds (about 600 frames each times 10 runs = 6000), randomly ordered (so that heat, battery voltage, etc. do not confound). I use the same input frame for every loop (in other words, it is not due to different image content that it is processing each time). It is literally processing the exact same input every time step. I can see the per-frame processing times change immediately if I unplug or plug in the AC adapter, or set powersave/performance using cpufrequtils. I am at a complete loss.

So, I have 3 specific questions:

1) What the hell is happening?

2) How do I set TLP (kernel params?) to force it to behave as if on AC (surely the battery can provide enough to run a cpu/memory-bound single-core program as fast as it does when on AC)? It's not even doing that much!

3) Are there any secret/weird settings that take effect on battery power, especially relating to multithreading? The problem is highly parallelizable -- there are basically 8 independent chains of filters that I can run in parallel. Usually I do this. When I do this on AC it goes like this:

MULTITHREAD (8 threads)
AC + powersave: 28.6 msec (variance 26.8-31.1)
AC + performance: 28.8 msec (variance 26.6-31.2)
BAT + powersave: 39 msec (variance 36.0-64.0) *3
BAT + performance: 176 msec (variance 39.0-202.0) *4

3* Note: this is very tame compared to running with the webcam -- then it spikes heavily between 40 and 90
4* Note: will update at 40 msec for a few frames, then go to 180 msec for a long time, then burst at 40 for a few.

The software is multithreaded via a thread pool. I have checked the locking, and no time is spent waiting for locks even in the extreme multithreaded case (this is actually where I spent the most time, because I thought it was the issue originally...). I get similar results with 2~8 threads: it gets slower on battery with more threads (especially in performance mode), and faster on AC with more threads.

EDIT: the problem happens even if I disable TLP.
I have not tried switching to the old acpi frequency governor yet (do you think that would work?).

EDIT 2: When in single-thread mode, htop shows only a single CPU core pegged (i.e. it is not using OpenMP or something similar to vectorize and use more cores).
Horrible performance on battery with intel_pstate driver
cpufreq-info - Utility to retrieve cpufreq kernel information. It will list the available frequency steps, available governors, current policy etc.

cpufreq-set - A tool which allows you to modify cpufreq settings (try e.g. cpufreq-set -g performance or cpufreq-set -f 2GHz once you know what frequencies your CPU can be set to).

You can also retrieve information about your cpufreq state directly from the /sys/devices/system/cpu/cpu*/cpufreq directory. For example, the available frequencies are stored in /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies.
I am using cpufreq to scale my CPU frequency. But I do that by clicking cpufreq icon on the panel of Ubuntu 12.04. If without a mouse, how can I show and scale CPU frequency by running commands in terminal?
Scale cpu frequency in CLI?
It looks as though there isn't any. The FreeBSD forums folks have weighed in, and there's no support for CPU frequency management on this chip. AMD's spec sheet makes no mention of PowerNow, either. This isn't surprising - small-form-factor boxes (like the T5730w; is that what you have?) are often built to run cool and low-power anyway. The Sempron 2100+ is 9W and built to be fanless, which fits that model. If you're actually having heat issues, you might be able to replace a chassis fan or otherwise improve air flow, but I think the CPU's not going to cooperate.
I have FreeBSD 9.0-RC3 installed on an HP Thin Client that has a Mobile AMD Sempron(tm) Processor 2100+. I am trying to get cpufreq and powerd to manage the frequency of the processor, but I'm not having any luck. Here's some lines from dmesg: FreeBSD 9.0-RC3 #0: Sun Dec 4 08:56:36 UTC 2011 [emailprotected]:/usr/obj/usr/src/sys/GENERIC amd64 CPU: Mobile AMD Sempron(tm) Processor 2100+ (997.52-MHz K8-class CPU) Origin = "AuthenticAMD" Id = 0x60fc2 Family = f Model = 6c Stepping = 2 Features=0x78bfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,MMX,FXSR,SSE,SSE2> Features2=0x2001<SSE3,CX16> AMD Features=0xea500800<SYSCALL,NX,MMX+,FFXSR,RDTSCP,LM,3DNow!+,3DNow!> AMD Features2=0x119<LAHF,ExtAPIC,CR8,Prefetch>module_register: module cpu/ichss already exists! Module cpu/ichss failed to register: 17 module_register: module cpu/est already exists! Module cpu/est failed to register: 17 module_register: module cpu/hwpstate already exists! Module cpu/hwpstate failed to register: 17 module_register: module cpu/p4tcc already exists! Module cpu/p4tcc failed to register: 17 module_register: module cpu/powernow already exists! Module cpu/powernow failed to register: 17powerd: lookup freq: No such file or directoryAlso, sysctl dev | grep freq returns no hits. Any suggestions on how to get CPU frequency management working?
FreeBSD CPU frequency scaling on AMD Sempron 2100
Short answer

The reason the path doesn't show up on your system is that no cpufreq driver is loaded. The driver is what creates /sys/devices/system/cpu/cpuY/cpufreq in sysfs and populates it with values. When trying to compile the kernel without CONFIG_X86_INTEL_PSTATE, it is forced on by the compilation prerequisites of the pcc_cpufreq and acpi_cpufreq drivers, so I guess you must set it in order to compile a driver.

More details

Looking at the kernel code under drivers/cpufreq/, we can see that the scaling_max_freq entry in sysfs is defined and maintained by cpufreq.c. There are two drivers implementing the cpufreq functionality - pcc_cpufreq and acpi_cpufreq. In order for the path to be initialized, one of the cpufreq drivers must be loaded.

Relevant fields in the kernel config:

# CPU frequency scaling drivers
CONFIG_X86_INTEL_PSTATE=y
CONFIG_X86_PCC_CPUFREQ=m
CONFIG_X86_ACPI_CPUFREQ=m

Check your system for the pcc_cpufreq driver. If it is available, you should get the cpufreq path without loading acpi_cpufreq, but since you said CONFIG_X86_INTEL_PSTATE isn't set in your kernel config file, you might be missing all of the cpufreq drivers.

Hope this helps.
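Assuming acpi_cpufreq was built as a module (CONFIG_X86_ACPI_CPUFREQ=m above), a quick check that loading a driver makes the directory appear:

sudo modprobe acpi-cpufreq
ls /sys/devices/system/cpu/cpu0/cpufreq/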
I checked my Linux sysfs and I don't have:

/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq

My kernel config doesn't have CONFIG_X86_INTEL_PSTATE, and it also isn't using the acpi-cpufreq driver. Is this sysfs file only created when intel_pstate is enabled? I am using a Yocto environment, not CentOS or Ubuntu.
There is no scaling_max_freq sys file
It should be pretty obvious: time spent executing a task / total time. So over a given interval, 10% load means 10% of that time was spent executing tasks, and 90% was idle.
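A minimal sketch of that calculation, sampling the aggregate cpu line of /proc/stat twice one second apart (for brevity, iowait and the other fields are lumped into rest and ignored here):

#!/bin/sh
# Approximate overall CPU load over a one-second window.
read -r cpu user1 nice1 sys1 idle1 rest < /proc/stat
sleep 1
read -r cpu user2 nice2 sys2 idle2 rest < /proc/stat
busy=$(( (user2 + nice2 + sys2) - (user1 + nice1 + sys1) ))
total=$(( busy + (idle2 - idle1) ))
echo "CPU load over the last second: $(( 100 * busy / total ))%"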
What does it mean for a core to run at different loads in different moments? How does a 10% load differ from a 90% load? How is this number calculated, essentially?
Linux and CPU usage
Thanks to Artem S. Tashkinov I was able to resolve the problem. He wrote in a comment:

Try using acpi-cpufreq instead of intel_pstate: https://silvae86.github.io/2020/06/13/switching-to-acpi-power

and that did the trick. I had to install the acpi_cpufreq driver (in my case it was already installed) via apt-get:

apt-get install acpi-support acpid acpi

and edit /etc/default/grub by adding intel_pstate=disable to GRUB_CMDLINE_LINUX_DEFAULT, like so (in my case):

GRUB_CMDLINE_LINUX_DEFAULT="quiet nosplash debug intel_pstate=disable"

Then run update-grub and restart the machine. Consequently the intel_pstate driver is no longer used and acpi_cpufreq is used in its stead. Apart from having many more governors to choose from (you can read about them here), I can now turn on the ondemand governor and set minimum and maximum frequencies, and they are kept that way. In my case I used

cpupower frequency-set -g ondemand -d 1.2GHz -u 2.0GHz

and so far the machine hasn't gotten too hot.

[edit] After still experiencing temperature problems I figured out what the problem is. It's the GPU frequency, and the fact that this problem still occurred when using 3D applications. In /sys/kernel/debug/dri/0/i915_ring_freq_table you can see the maximum frequency the GPU can run at corresponding to each CPU frequency. After setting the CPU frequency, the GPU frequency has to be adjusted as well, in /sys/class/drm/card0/gt_max_freq_mhz and in /sys/class/drm/card0/gt_boost_freq_mhz.
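For the record, a sketch of that last step (the 650 MHz value is only an example - pick one of the frequencies listed in i915_ring_freq_table for your machine):

# cap the integrated GPU's clocks; values are in MHz
echo 650 | sudo tee /sys/class/drm/card0/gt_max_freq_mhz
echo 650 | sudo tee /sys/class/drm/card0/gt_boost_freq_mhz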
I am using Debian 10 (buster), 4.19.0-16-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64 GNU/Linux on my Lenovo T530 laptop which has a i7-3630QM. It has been many years since I bothered with frequency scaling and temperature but as of late it has become a bit of a problem. When I try to run CPU intensive applications the CPU can get very hot for longer periods of time, I am talking above 100°C. Now this laptop is more than eight years old and I'm not sure how much longer it's going to last so I want toHave it last as long as possible and Reduce the noise pollution since the fan gets rather loud at high loads and temperatures.Years ago there were several governors that could be used to do this so I was surprised that now there is only powersave and performance. I tried to use cpupower to set a maximum frequency of e.g. 2.4GHz: cpufreq-set -u 2.4GHz. Unfortunately that does not seem to really work. cpupower frequency-info reports current policy: frequency should be within 1.20 GHz and 2.40 GHz. The governor "powersave" may decide which speed to use within this range.but when I start a CPU intensive applications (especially one that is in need of 3D graphics acceleration), in my case the game "Stellaris", the CPU gets really hot nevertheless, which is not surprising since cpupower frequency-info reports hardware limits: 1.20 GHz - 3.40 GHz available cpufreq governors: performance powersave current policy: frequency should be within 1.20 GHz and 2.40 GHz. The governor "powersave" may decide which speed to use within this range. current CPU frequency: Unable to call hardware current CPU frequency: 3.30 GHz (asserted by call to kernel)which actually contradicts itself. Now this issue has been discussed e.g. in this issue and the author resolves it in his answer. However, some people have commented that this does not work. In my case it does not work either. Another poster in this issue writes in his answer that this does not work either and actually links to a kernel document where it states that it is actually not possible to reliably set the frequency. The fans of my laptop have been cleaned recently and new thermal paste has been applied but as I stated this machine is old and gets hot nevertheless. A different poster suggested setting the maximum load to a lower percentage, which actually works but does not rectify my problem since the CPU gets really hot nevertheless. Does anyone know of a different, reliable way to limit the CPU frequency? In the aforementioned example, Stellaris, I think I noticed that the CPU tried to limit the frequency to the set value of 2.4GHz while the game was loading and while I was in the main menu. Once I loaded a save file and was in the game itself it spiked to its maximum frequency. Is it possible that the application itself overrides these values? Does anyone know of any other way to throttle the CPU from reaching such high temperatures? E.g. is it possible to set a lower temperature threshold for when the CPU starts throttling itself?
Why does my CPU disregard the maximum frequency set by e.g. cpupower and how can I keep my CPU from getting too hot?
The values are given in kHz (see the documentation). So 60000 is 60 MHz, 396000 is 396 MHz.
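If you want the reading converted on the fly, something like this works (cpu0 as an example; cpuinfo_cur_freq usually requires root):

sudo cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq | awk '{ printf "%.0f MHz\n", $1 / 1000 }'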
I was trying to get the frequency from my device at run time. I used this device node:

sudo cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_min_freq

It gave me 60000. I tried this one:

cat /sys/devices/system/cpu/cpu*/cpufreq/cpuinfo_cur_freq

I get 396000. What is the unit of these values? Is 60000 in Hz or MHz?
What is the unit of this value
In order to set a specific frequency, the 'userspace' governor is required in cpufreq. Please see the '2.3 Userspace' section in this documentation.
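A sketch of the sysfs route (cpu0 as an example; the value is in kHz, so 2000000 is 2 GHz). Note that the intel_pstate driver in your output does not offer the userspace governor at all, which is why it is missing from your list; this only works with a driver that provides it, such as acpi-cpufreq:

echo userspace | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo 2000000 | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed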
I have run: sudo parallel cpufreq-set -c {} -u 2000MHz ::: 0 1 2 3yet when I stress the CPU, cpufreq-info returns: analyzing CPU 2: driver: intel_pstate CPUs which run at the same hardware frequency: 2 CPUs which need to have their frequency coordinated by software: 2 maximum transition latency: 4294.55 ms. hardware limits: 800 MHz - 2.90 GHz available cpufreq governors: performance, powersave current policy: frequency should be within 800 MHz and 2.00 GHz. The governor "powersave" may decide which speed to use within this range. current CPU frequency is 2.29 GHz.2.29 GHz is higher than 2.00 GHz. And this behaviour is seen for all cores. It is as if it ignores the frequency limit completely. Why does that happen? Is there a way to avoid it happening? $ uname -a Linux travel 4.15.0-72-generic #81-Ubuntu SMP Tue Nov 26 12:20:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux$ cat /proc/cpuinfo processor : 3 vendor_id : GenuineIntel cpu family : 6 model : 42 model name : Intel(R) Core(TM) i5-2410M CPU @ 2.30GHz stepping : 7 microcode : 0x2f cpu MHz : 1792.712 cache size : 3072 KB physical id : 0 siblings : 4 core id : 1 cpu cores : 2 apicid : 3 initial apicid : 3 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts md_clear flush_l1d bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit bogomips : 4585.01 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management:
Why does CPU governor not respect max CPU speed limit?
For current AMD CPUs, the appropriate driver is the ACPI driver. K6, K7, and K8 CPUs have specific drivers, but K10 and later are handled by the “generic” ACPI driver. There is additional support for frequency sensitivity feedback on Jaguar/Puma and later low-power CPUs, available in the amd-freq-sensitivity module. The kernel contains documentation on its CPU performance scaling features.
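You can confirm which driver is actually in charge on a given machine by asking sysfs (cpu0 as an example):

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
# on a modern AMD system using the generic driver this typically prints: acpi-cpufreq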
I know we have the acpi_cpufreq driver and the intel_pstate driver to use. However, I think the intel_pstate driver is only for Intel. What's the CPU frequency driver for AMD? Is there any introduction (e.g. governors) for the AMD CPU freq driver?
What is the CPU frequency driver for AMD?
I think you want the cpu Usage string to be shown on the same line each time and to be followed immediately by the cpu Idle line. To achieve that you can use tput with the el (clr_eol) terminal capability to clear a line and cuu (parm_up_cursor) to move n lines up. You can read about terminal capabilities in man terminfo. Your script would look like this:

#!/bin/bash

PREV_CPU_USE=0
PREV_CPU_IDLE=0
PREV_EPOCH_TIME=0

# Setting the delimiter
IFS=$'\n'

counter=0

while true; do
    # Getting the total CPU usage
    CPU_USAGE=$(head -n 1 /proc/stat)
    # Getting the Linux Epoch time in seconds
    EPOCH_TIME=$(date +%s)
    # Splitting the /proc/stat output
    IFS=" " read -ra USAGE_ARRAY <<< "$CPU_USAGE"
    # Calculating the used CPU time, CPU idle time and CPU total time
    CPU_USE=$((USAGE_ARRAY[1] + USAGE_ARRAY[2] + USAGE_ARRAY[3] + USAGE_ARRAY[6] + USAGE_ARRAY[7] + USAGE_ARRAY[8]))
    CPU_IDLE=$((USAGE_ARRAY[4] + USAGE_ARRAY[5]))
    # Calculating the differences
    DIFF_USE=$((CPU_USE - PREV_CPU_USE))
    DIFF_IDLE=$((CPU_IDLE - PREV_CPU_IDLE))
    DIFF_TOTAL=$((DIFF_USE + DIFF_IDLE))
    DIFF_TIME=$((EPOCH_TIME - PREV_EPOCH_TIME))
    printf "\r%s%s Usage: %d (counter = %d)\n" "$(tput el)" "${USAGE_ARRAY[0]}" "$((DIFF_USE*100/(DIFF_TOTAL*DIFF_TIME)))" "$counter"
    printf "\r%s%s Idle: %d (counter = %d)" "$(tput el)" "${USAGE_ARRAY[0]}" "$((DIFF_IDLE*100/(DIFF_TOTAL*DIFF_TIME)))" "$counter"
    counter=$((counter + 1))
    tput cuu 1
    # Assigning the old values to the PREV_* values
    PREV_CPU_USE=$CPU_USE
    PREV_CPU_IDLE=$CPU_IDLE
    PREV_EPOCH_TIME=$EPOCH_TIME
    # Sleep for one second
    sleep 1
done

I added an extra counter variable for debugging purposes. It is incremented after each print to show that the old line really is replaced by the new line on the screen. I've also replaced your calls to echo with printf as it's more portable.
I am trying to write a BASH script which will evaluate and show in the terminal a list of all cores and their current load. I am using the output of the /proc/stat. For example: cat /proc/stat user nice system idle iowait irq softirq steal guest guest_nice cpu 4705 356 584 3699 23 23 0 0 0 0and evaluating the used CPU time by summing the user, nice, system, irq, softirq, steal and the CPU idle time by summing the idle, iowait. Then I am adding the used CPU time + CPU idle time to obtain the total CPU time and dividing the CPU use time to total CPU time. The problem with this methodology is that this is the average CPU usage since the system was last booted. In order to get the current usage, I need to check two times the /proc/stat and use the differences between the total CPU time and used CPU time between the two checks and then to divide the result by the difference in time in seconds between them. For this I am using a while infinite loop complimented with a sleep command. I want to have the output in the following format: CPU: 10% CPU0: 15% CPU1: 5% CPU2: 7% CPU3: 13%And I want the total CPU usage across all cores and the CPU usage per core to update after every sleep automatically. This is my code so far: #!/bin/bashPREV_CPU_USE=0 PREV_CPU_IDLE=0 PREV_EPOCH_TIME=0# Setting the delimiter IFS=$'\n'while true; do # Getting the total CPU usage CPU_USAGE=$(head -n 1 /proc/stat) # Getting the Linux Epoch time in seconds EPOCH_TIME=$(date +%s) # Splitting the /proc/stat output IFS=" " read -ra USAGE_ARRAY <<< "$CPU_USAGE" # Calculating the used CPU time, CPU idle time and CPU total time CPU_USE=$((USAGE_ARRAY[1] + USAGE_ARRAY[2] + USAGE_ARRAY[3] + USAGE_ARRAY[6] + USAGE_ARRAY[7] + USAGE_ARRAY[8] )) CPU_IDLE=$((USAGE_ARRAY[4] + USAGE_ARRAY[5])) # Calculating the differences DIFF_USE=$((CPU_USE - PREV_CPU_USE)) DIFF_IDLE=$((CPU_IDLE - PREV_CPU_IDLE)) DIFF_TOTAL=$((DIFF_USE + DIFF_IDLE)) DIFF_TIME=$((EPOCH_TIME - PREV_EPOCH_TIME)) #Printing the line and ommiting the trailing new line and using carrier trailer to go to the beginning of the line echo -en "${USAGE_ARRAY[0]} Usage: $((DIFF_USE*100/(DIFF_TOTAL*DIFF_TIME)))% \\r\\n" echo -en "${USAGE_ARRAY[0]} Idle: $((DIFF_IDLE*100/(DIFF_TOTAL*DIFF_TIME)))% \\r" # Assigning the old values to the PREV_* values PREV_CPU_USE=$CPU_USE PREV_CPU_IDLE=$CPU_IDLE PREV_EPOCH_TIME=$EPOCH_TIME # Sleep for one second sleep 1 doneHere I have simplified the script and I am actually printing only the current CPU usage and Idle CPU time on two different lines but even though the cpu Idle is remaining on one line the cpu Usage is adding new lines like: cpu Usage: 0% cpu Usage: 0% cpu Usage: 0% cpu Idle: 99%Is there an option to have the cpu Usage on one line only for the whole duration of the script?
Bash script for evaluating CPU usage
Hardware differences may be accounted for by BIOS settings; review the BIOS on each system for differences. This may require downtime, so is best done prior to production launch, e.g. as part of setup tasks. Some vendors provide software tools that actually run on unix to access the BIOS, if available this may be preferable to automate the proper configuration of the BIOS (see vendor website for details). Another option for configuring the BIOS may be to PXE boot the system and run an image containing the BIOS-configuring software.
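One low-risk way to compare firmware from within the OS, before scheduling BIOS-setup downtime, is to diff the DMI tables; a sketch, assuming dmidecode is available on both hosts (note this exposes firmware versions and a subset of settings, not every BIOS option):

# On each server:
sudo dmidecode -t bios -t system > /tmp/dmi-$(hostname).txt
# Copy both dumps to one machine, then:
diff /tmp/dmi-serverA.txt /tmp/dmi-serverB.txt

A version mismatch found this way is often the quickest explanation for two "identical" servers behaving differently.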
I have two exactly same bare metal servers with same OS/kernel configuration, but they are running at two different CPU frequencies: the one is fixed at 2300MHz, while the other changes between 1200MHz and 2300MHz. The info in sudo cpupower frequency-info is as below: The first server: analyzing CPU 0: driver: intel_pstate CPUs which run at the same hardware frequency: 0 CPUs which need to have their frequency coordinated by software: 0 maximum transition latency: Cannot determine or is not supported. hardware limits: 1.20 GHz - 2.50 GHz available cpufreq governors: performance powersave current policy: frequency should be within 1.20 GHz and 2.50 GHz. The governor "powersave" may decide which speed to use within this range. current CPU frequency: 2.30 GHz (asserted by call to hardware) boost state support: Supported: yes Active: yesThe second server: analyzing CPU 0: driver: intel_pstate CPUs which run at the same hardware frequency: 0 CPUs which need to have their frequency coordinated by software: 0 maximum transition latency: Cannot determine or is not supported. hardware limits: 1.20 GHz - 2.50 GHz available cpufreq governors: performance powersave current policy: frequency should be within 2.00 GHz and 2.50 GHz. The governor "performance" may decide which speed to use within this range. current CPU frequency: 1.20 GHz (asserted by call to hardware) boost state support: Supported: yes Active: yesAnd the cat /proc/cpuinfo is exactly the same, except the cpu MHz line: processor : 31 vendor_id : GenuineIntel cpu family : 6 model : 62 model name : Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz stepping : 4 microcode : 0x428 cpu MHz : 2299.921 cache size : 20480 KB physical id : 1 siblings : 16 core id : 7 cpu cores : 8 apicid : 47 initial apicid : 47 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms bogomips : 4004.73 clflush size : 64 cache_alignment : 64 address sizes : 46 bits physical, 48 bits virtual power management:I tried many ways which are introduced by some articles, such as sudo service cpuspeed stop,sudo cpupower frequency-set -g performance, sudo cpupower frequency-set -d 2000000, etc, but nothing happened. It seems that the CPU frequency scaling is not under control at all: the one is runing at max boosted frequency while the driver/governor says powersave, and the other is jumping from 1200MHz while the driver/governor says performance(which should limit the frequency between 2000MHz and 2500MHz). BTW, there is no temperature throttle in both servers. My OS is CentOS release 6.6 (Final), and kernel is 3.10.73, how can I get the control of the CPU frequency scaling things?
How to disable CPU frequency scaling in CentOS 6?
So my question is, what does it do to put my CPU in powersave mode?
It's adjusting the clock frequency/core activation so that it uses less power. When you're on the performance governor, it's probably still performing the same work as before, it's just getting through it faster. If your system has fewer resources (such as because you switched governors), being at "40%" no longer means what it did when you were using all your cores at maximum clock frequency. You're at "40%" but it's likely because your overall capacity has shrunk as a result of trying to save power. If you don't like what it picks by default, there are tunables for controlling how fast it will react to increased load and how far it'll ramp up/down, but that's a little too open-ended to give you specific examples. You just have to see what works for you.
The usage of my CPU increased and everything slowed down, so is it even helping out at all?
The point of CPU governors being tunable is that the admin (i.e. you) is supposed to tell the system what "helping" actually means. In your case, putting it into powersave is you telling the system that you care more about battery life than you do about a fast system.
Is it even saving power?
You can measure how much power is being consumed by the CPUs, or just make a general note of how fast your battery drains. It may be helping, you just have to check to see. The CPU isn't the only thing that consumes power (your display and peripherals can consume a lot as well) but the CPU is all the governor is going to control.
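If you want a number rather than a feeling, many laptops expose the battery's instantaneous discharge rate in sysfs; a rough sampling loop, assuming your battery is BAT0 and exposes power_now (some models report current_now/voltage_now instead):

# Log the discharge rate (microwatts) once a minute under each governor
while sleep 60; do
    cat /sys/class/power_supply/BAT0/power_now
done >> /tmp/powersave-test.log

Run it for a while under powersave and again under performance with a comparable workload, and compare the averages.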
I use indicator-cpufreq with Cinnamon to control my CPU-usage and save power, as my battery life has been running low. So, I use indicator-cpufreq to put my CPU in powersave mode. However, I have this weird issue with Cinnamon, that sometimes when I wake up from suspend, CPU usage skyrockets up to 40% per core, and everything slows down significantly. This happened again with my CPU in powersave mode. However, when I tried putting it in performance mode, everything went back to normal speed, and the CPU usage fell back to normal. So my question is, what does it do to put my CPU in powersave mode? The usage of my CPU increased and everything slowed down, so is it even helping out at all? Is it even saving power? On a side note, according to all the images I see of indicator-cpufreq, it gives multiple options to choose from, whereas I only see the options Performance and Powersave. Why is this?
Does Powersave Mode on the CPU save power?
Look at this SO answer:
https://stackoverflow.com/a/15213255/438544
It mentions these three links:
http://shallowsky.com/blog/linux/kernel/sysfs-thermal-zone.html
http://lwn.net/Articles/268958/
http://www.mjmwired.net/kernel/Documentation/thermal/sysfs-api.txt
They mention that on newer systems you should have all thermal information under:
/sys/class/thermal/thermal_zoneN/temp
where N is a number starting from 0. On my Xubuntu 13.04, I have two: thermal_zone0 and thermal_zone1. Note that my CPU is quad-core; from cpuid:
Processor name string: AMD Phenom(tm) II N950 Quad-Core Processor
so it's not giving me a temperature per core. It might be that it doesn't even have a temperature sensor per core, but I could not find more information about that. This is, however, the only location where this can be read from that I am aware of. It is, as also mentioned by this answer on the same SO question above:
https://stackoverflow.com/a/2440544/438544
unlikely to be the same across different computers, Linux distributions, kernel versions, etc. - that is, it's unlikely to be a one-size-fits-all solution. You might need to do it in a few different ways or normalize the results if needed.
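To read every zone with its label in one go, so you know which temperature belongs to what, a small loop over the sysfs entries works (temperatures are in millidegrees Celsius):

for z in /sys/class/thermal/thermal_zone*; do
    printf '%s: %s\n' "$(cat "$z/type")" "$(cat "$z/temp")"
done

Note this only works on Linux; for the other UNIX systems mentioned in the question you would still need a per-OS branch.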
I want to make a shell script that runs as a daemon process and every X minutes read the temperature of every cpu core to report it later with GNU plot. And here is my question, Is there any file in /sys or /proc or any other location which this info be uniformly placed across several UNIX systems (not only in Linux)? If not, tell me at least, where can I find these files in Linux.
Where can I find cpu temperature and frequency without any specific command?
I have been able to make the corei5 machine behave more appropriately, and only scale up the frequency of the cores that actually have work to do. To do so, I found that I had to switch the intel_pstate driver mode. In /sys/devices/system/cpu/intel_pstate/status you can switch between active and passive modes. active mode: In this mode the driver bypasses the scaling governors layer of CPUFreq and provides its own scaling algorithms for P-state selection. passive mode: ...the driver behaves like a regular CPUFreq scaling driver. That is, it is invoked by generic scaling governors when necessary to talk to the hardware in order to change the P-state of a CPU... If I read this correctly, in active mode the CPU has less feedback from the OS itself. I think the CPU incorrectly guesses that all cores need to be throttled up. What is still unexplained is the fact that the Xeon CPU does just fine in active mode using the same intel_pstate driver. But that could just be a case of not all CPUs being created equal. Or maybe even a suspect motherboard or BIOS that doesn't properly do P-states.
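For reference, the mode switch itself is a one-liner on kernels that support it; checking first and switching second:

# Show the current intel_pstate mode (active or passive)
cat /sys/devices/system/cpu/intel_pstate/status
# Switch to passive so generic cpufreq governors drive P-state selection
echo passive | sudo tee /sys/devices/system/cpu/intel_pstate/status

The change is not persistent; to make it stick you can boot with intel_pstate=passive on the kernel command line.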
I have two machines on which I run the same OS, the same kernel, the same CPU freq scaling driver, and the same CPU freq scaling governor. One will boost all core frequencies if one process pins a core. One will boost only one physical core if there is one process pinning a core. Machine A CPU: Intel(R) Xeon(R) W-2140B CPU @ 3.20GHz Scaling driver: intel_pstate Scaling governor: powersave OS: Ubuntu 21.10 Kernel: 5.13.0-22-genericMachine B CPU: 11th Gen Intel(R) Core(TM) i5-11600K @ 3.90GHz Scaling driver: intel_pstate Scaling governor: powersave OS: Ubuntu 21.10 Kernel: 5.13.0-22-genericObserve these examples: Pinning two cores on machine A: (NOTE that this machine has Hyperthreading, so scales up 4 of the 16 available virtual cores.)Pinning two cores on machine B: (NOTE that this machine has 6 cores with HT, and scales up all 12 virtual cores.)The screenshots are of freqtop, an open sourced frequency monitor that I wrote myself. The orange ticks show the load on each core defined as a percentage of user+kernel cycles in relation of total cycles (user+kernel+idle.) Why does the i5-11600K CPU throttle up all cores, even if there is demand for just one? UPDATE: A difference I found between machines is the number of pstates. For the Xeon /sys/devices/system/cpu/intel_pstate is set to 33, and the corei5 has it set to 42. I am not sure what the significance is for this.
Frequency scaling governor scales up too many cores
"CPU temperatures in linux: throttling or wrong reading?" is helpful with regard to frequency scaling and it actually solves the issue, I've tried with the maximum 2.5 GHz on both laptop and desktop and the laptop performs considerably better than the desktop with 2.5GHz.Laptop never exceeded 80 degree while rendering a 16 min long video in kdenlive. On the other hand the desktop reached critical points: temp1 of PCI Adapter reached 85 and temp2 of ISA Adapter reached 93 several times while rendering.EDIT At 2.1 GHz Desktop is better than the Laptop! Core temperatures of the laptop were around 73C and never reached 75C whereas on the Desktop temp1 of PCI Adapter was around 56C, never reached 60 and the temp2 of ISA Adapter was around 65C, never reached 70C! At this point, there's no difference between Windows 10 and Linux on these machines in terms of render time! The only difference on the laptop is Windows 10 would make some sound (like tapping on plastic) atleast once during rendering, I didn't hear that sound on Linux! However, I do hear the same sound on Linux If I disable intel_pstate!
I've a Desktop and a Laptop with overheating issue and as far as I've known from Arch Wiki and other contributors on this site, I've to limit the cpu frequency to resolve the issue. On both system I've installed cpupower and sudo cpupower frequency-info on the Desktop with AMD Phenom(tm) II X4 955 Processor returns: analyzing CPU 0: driver: acpi-cpufreq CPUs which run at the same hardware frequency: 0 CPUs which need to have their frequency coordinated by software: 0 maximum transition latency: 4.0 us hardware limits: 800 MHz - 3.20 GHz available frequency steps: 3.20 GHz, 2.50 GHz, 2.10 GHz, 800 MHz available cpufreq governors: performance schedutil current policy: frequency should be within 800 MHz and 3.20 GHz. The governor "schedutil" may decide which speed to use within this range. current CPU frequency: 800 MHz (asserted by call to hardware) boost state support: Supported: no Active: no Boost States: 0 Total States: 4 Pstate-P0: 3200MHz Pstate-P1: 2500MHz Pstate-P2: 2100MHz Pstate-P3: 800MHzand on the laptop with Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz returns: analyzing CPU 0: driver: intel_pstate CPUs which run at the same hardware frequency: 0 CPUs which need to have their frequency coordinated by software: 0 maximum transition latency: Cannot determine or is not supported. hardware limits: 800 MHz - 3.50 GHz available cpufreq governors: performance powersave current policy: frequency should be within 800 MHz and 3.50 GHz. The governor "powersave" may decide which speed to use within this range. current CPU frequency: Unable to call hardware current CPU frequency: 1.70 GHz (asserted by call to kernel) boost state support: Supported: yes Active: yesOn the laptop, there's nothing with the word overclocking in BIOS BUT on the Desktop there's an Overclocking Profile and the contents inside that are: Overclocking Profile 1 [None] Overclocking Profile 2 [None] Overclocking Profile 3 [None] Overclocking Profile 4 [None] Overclocking Profile 5 [None] Overclocking Profile 6 [None]OC Retry Count [3]lsmod | grep freq on the desktop returns: pcc_cpufreq 16384 0 acpi_cpufreq 24576 0and the same on the laptop returns: pcc_cpufreq 16384 0So, on laptop I, first, have to: echo 1 > /sys/devices/system/cpu/intel_pstate/no_turboto disable boost and then on both laptop and desktop, I've to set the limit like this: cpupower frequency-set -u 3.00 GHz cpupower frequency-set -d 2.50 GHzDo I have to enclose 3.00 GHz and 2.50 GHz with "" or '' or it'd be with underscore like 3.00_GHz and 2.50_GHz? Do I have to do anything on BIOS for the desktop? What are those available frequency steps on the desktop? Should I choose the values specified there for cap and floor for the desktop? Looks like there's no such frequency steps on the laptop so am I free to choose any value in between 800 MHz and 3.50 GHz for the laptop? What does current CPU frequency: Unable to call hardware mean for the laptop?
Scaling CPU frequency
Fixed: I went back to a kernel with a custom DSDT override; this enabled CPU frequency scaling.
on most CPU you can use "cpufreq-aperf" to check cpu frequency, but I don't think this is compatible with AMDGPU. I did check for "aperf" and found: $ cat /proc/cpuinfo | grep -o "aperf[a-z]*" | head -1 aperfmperfAttempt to use "cpufreq-aperf": $ sudo cpufreq-aperf Error reading /dev/cpu/0/msr, load/enable msr.ko$ sudo modprobe msr$ sudo cpufreq-aperf CPU Average freq(KHz) Time in C0 Time in Cx C0 percentage 000 [offline] 001 [offline] 002 [offline] 003 [offline]Am I missing something, or is this tool incompatible with AMDGPU? (I have an A10-8700P). The fact that "msr" is not autoloaded makes me thing it's not compatible. So is there another tool I can use, or am I missing something? Reason I want this, is i've enabled powerplay and I'd like someway to see the effect (other than running benchmarks). Update, I think it maybe related to a module I have loaded that I may need to remove? These are my modules; Module Size Used by ablk_helper 16384 1 aesni_intel ac 16384 0 aesni_intel 167936 67520 aes_x86_64 20480 1 aesni_intel ahci 36864 3 amdgpu 1327104 3 amdkfd 122880 1 arc4 16384 2 autofs4 36864 2 battery 16384 0 binfmt_misc 20480 1 button 16384 0 ccm 20480 2 cfg80211 471040 3 iwlmvm,iwlwifi,mac80211 crc16 16384 1 ext4 crc32_pclmul 16384 0 crct10dif_pclmul 16384 0 cryptd 20480 22508 ablk_helper,ghash_clmulni_intel,aesni_intel ctr 16384 4 drm 286720 7 amdgpu,ttm,drm_kms_helper drm_kms_helper 122880 1 amdgpu ecb 16384 2 ecryptfs 90112 1 efi_pstore 16384 0 efivarfs 16384 1 efivars 20480 1 efi_pstore ehci_hcd 77824 1 ehci_pci ehci_pci 16384 0 evdev 24576 17 ext4 499712 1 fam15h_power 16384 0 fat 65536 1 vfat gf128mul 16384 1 lrw ghash_clmulni_intel 16384 0 glue_helper 16384 1 aesni_intel hid 106496 3 i2c_hid,hid_generic,usbhid hid_generic 16384 0 hp_accel 28672 0 hp_wireless 16384 0 i2c_algo_bit 16384 1 amdgpu i2c_core 53248 8 i2c_hid,i2c_piix4,i2c_designware_core,i2c_algo_bit,amdgpu,i2c_designware_platform,drm_kms_helper,drm i2c_designware_core 20480 1 i2c_designware_platform i2c_designware_platform 16384 0 i2c_hid 20480 0 i2c_piix4 24576 0 input_polldev 16384 1 lis3lv02d ip6table_filter 16384 0 ip6_tables 24576 1 ip6table_filter iptable_filter 16384 1 ip_tables 24576 1 iptable_filter ipt_REJECT 16384 3 irqbypass 16384 1 kvm iwlmvm 266240 0 iwlwifi 147456 1 iwlmvm jbd2 90112 1 ext4 joydev 20480 0 k10temp 16384 0 kvm 495616 1 kvm_amd kvm_amd 69632 0 libahci 28672 1 ahci libata 204800 2 ahci,libahci lis3lv02d 20480 1 hp_accel lp 20480 0 lrw 16384 1 aesni_intel mac80211 569344 1 iwlmvm mbcache 16384 2 ext4 mfd_core 16384 1 rtsx_pci mmc_core 118784 1 rtsx_pci_sdmmc msr 16384 0 nf_conntrack 90112 2 nf_conntrack_ipv4,xt_conntrack nf_conntrack_ipv4 20480 2 nf_defrag_ipv4 16384 1 nf_conntrack_ipv4 nf_reject_ipv4 16384 1 ipt_REJECT nls_cp437 20480 1 nls_utf8 16384 1 parport 40960 3 lp,parport_pc,ppdev parport_pc 28672 0 pci_stub 16384 1 ppdev 20480 0 processor 36864 4 psmouse 40960 0 rfkill 20480 4 cfg80211 rtsx_pci 49152 1 rtsx_pci_sdmmc rtsx_pci_sdmmc 24576 0 scsi_mod 188416 3 sd_mod,libata,sg sd_mod 40960 4 serio_raw 16384 0 sg 32768 0 shpchp 32768 0 snd 73728 18 snd_hda_intel,snd_hwdep,snd_hda_codec,snd_timer,snd_hda_codec_hdmi,snd_hda_codec_generic,snd_hda_codec_realtek,snd_pcm snd_hda_codec 102400 4 snd_hda_intel,snd_hda_codec_hdmi,snd_hda_codec_generic,snd_hda_codec_realtek snd_hda_codec_generic 65536 1 snd_hda_codec_realtek snd_hda_codec_hdmi 45056 1 snd_hda_codec_realtek 69632 1 snd_hda_core 61440 5 snd_hda_intel,snd_hda_codec,snd_hda_codec_hdmi,snd_hda_codec_generic,snd_hda_codec_realtek 
snd_hda_intel 32768 5 snd_hwdep 16384 1 snd_hda_codec snd_pcm 86016 4 snd_hda_intel,snd_hda_codec,snd_hda_core,snd_hda_codec_hdmi snd_timer 28672 1 snd_pcm soundcore 16384 1 snd sp5100_tco 16384 0 sparse_keymap 16384 0 sunrpc 274432 1 thermal 20480 0 tpm 36864 2 tpm_tis,tpm_tis_core tpm_tis 16384 0 tpm_tis_core 20480 1 tpm_tis ttm 81920 1 amdgpu usb_common 16384 1 usbcore usbcore 208896 5 usbhid,ehci_hcd,xhci_pci,xhci_hcd,ehci_pci usbhid 49152 0 vboxdrv 380928 3 vboxnetadp,vboxnetflt,vboxpci vboxnetadp 28672 0 vboxnetflt 28672 0 vboxpci 24576 0 vfat 20480 1 xhci_hcd 167936 1 xhci_pci xhci_pci 16384 0 x_tables 28672 7 ipt_REJECT,ip_tables,iptable_filter,xt_tcpudp,ip6table_filter,xt_conntrack,ip6_tables xt_conntrack 16384 2 xt_tcpudp 16384 2
AMDGPU CPU Query Frequency
Turns out the problem was that the BIOS hadn't detected the CPU properly when it was first installed, and resetting the BIOS settings to the default fixed the problem. This was suggested by Intel support, and surprisingly enough it worked. So it looks like the fantastic VisualBIOS is as buggy, if not more so, than the traditional BIOS setup! After the reset i7z then showed the multipliers for 1/2/3/4 cores as 39x/38x/37x/37x as sort of expected, although I didn't realise until now that Intel's turbo boost maximum speed only applies when a single core is active. I tried adjusting the turbo boost multipliers in the BIOS setup (hint: use the keyboard navigation, you can get to settings that you can't select with the mouse) and setting this to 45 made i7z report the turbo boost multipliers as 45x/45x/45x/45x. However the multiplier still won't go above 37x when four cores are active, so it looks like this setting can only be reduced, not increased. Shame!
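Two quick sysfs checks are handy when verifying this kind of BIOS fix, at least on systems using the intel_pstate driver:

# 0 means turbo is allowed, 1 means it has been disabled
cat /sys/devices/system/cpu/intel_pstate/no_turbo
# The kernel's idea of the maximum attainable frequency, in kHz
cat /sys/bus/cpu/devices/cpu0/cpufreq/cpuinfo_max_freq

After a reset like the one described above, the second value should reflect the turbo ceiling rather than a crippled base reading.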
While staring at a terminal waiting for my code to compile, I started to wonder whether Intel's Turbo Boost was actually working. I have an i7-4770K which is rated at 3.5GHz, with Turbo Boost up to 3.9GHz. Doing some reading I discovered that Turbo Boost is only really used when one core is doing more work than the others, so as compiling in parallel uses all the processor cores, Turbo Boost won't activate for me - so much for that. However while I was investigating this, I noticed that my processor was reporting its maximum speed as 3.2GHz, and while all four cores (eight threads) were compiling, the maximum speed reported by i7z is only 2.992GHz. Why would this be, when the base speed is supposed to be 3.5GHz? Socket [0] - [physical cores=4, logical cores=8, max online cores ever=4] TURBO ENABLED on 4 Cores, Hyper Threading ON Max Frequency without considering Turbo 3091.73 MHz (99.73 x [31]) Max TURBO Multiplier (if Enabled) with 1/2/3/4 Cores is 32x/32x/31x/30x Real Current Frequency 2992.01 MHz [99.73 x 30.00] (Max of below) Core [core-id] :Actual Freq (Mult.) C0% Halt(C1)% C3 % C6 % C7 % Temp VCore Core 1 [0]: 2992.01 (30.00x) 100 1 0 0 0 54 0.9540 Core 2 [1]: 2992.00 (30.00x) 100 1 0 0 0 59 0.9515 Core 3 [2]: 2992.00 (30.00x) 100 1 0 0 0 57 0.9517 Core 4 [3]: 2992.00 (30.00x) 100 1 0 0 0 56 0.9540$ cat /proc/cpuinfo model name : Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz cpu MHz : 3000.351$ cat /sys/bus/cpu/devices/cpu0/cpufreq/cpuinfo_max_freq 3200000I tried changing the cpufreq governor from powersave to performance but still the maximum speed is reported at only 3.2GHz, and i7z only reports the processors running at 2992MHz at full load. (They do go just above 3.1GHz while mostly idle though.) Are there any configuration options I can adjust to get the processor up to 3.5GHz? Are there any other reasons why the CPU might be slowing down? Idle temperatures are just below 50 degrees and I've never seen it go above 65, even when compiling for a long time, so temperature shouldn't be a problem.
Linux is reporting the CPU as too slow?
If you don't have such a setting in your BIOS, the solution is solved quite well here for Linux: http://notepad2.blogspot.com/2014/11/a-script-to-turn-off-intel-cpu-turbo.html I created an enhanced version of that script to toggle the turbo boost here on GitHub: https://github.com/rubo77/intel-turbo-boostOld version: just create a /usr/local/sbin/turbo-boost.sh script: #!/bin/bashis_root () { return $(id -u) }has_sudo() { local prompt prompt=$(sudo -nv 2>&1) if [ $? -eq 0 ]; then # has_sudo__pass_set return 0 elif echo $prompt | grep -q '^sudo:'; then # has_sudo__needs_pass" return 0 else echo "no_sudo" return 1 fi }if ! is_root && ! has_sudo; then echo "Error: need to call this script with sudo or as root!" exit 1 fimodprobe msr if [[ -z $(which rdmsr) ]]; then echo "msr-tools is not installed. Run 'sudo apt-get install msr-tools' to install it." >&2 exit 1 fiif [[ ! -z "$1" && "$1" != "toggle" && "$1" != "enable" && "$1" != "disable" ]]; then echo "Invalid argument: $A" >&2 echo "" echo "Usage: $(basename $0) [disable|enable|toggle]" exit 1 fiA=$1 cores=$(cat /proc/cpuinfo | grep processor | awk '{print $3}') initial_state=$(rdmsr -p1 0x1a0 -f 38:38) for core in $cores; do if [[ $A == "toggle" ]]; then echo -n "state was " if [[ $initial_state -eq 1 ]]; then echo "disabled" A="enable" else echo "enabled" A="disable" fi fi if [[ $A == "disable" ]]; then wrmsr -p${core} 0x1a0 0x4000850089 fi if [[ $A == "enable" ]]; then wrmsr -p${core} 0x1a0 0x850089 fi state=$(rdmsr -p${core} 0x1a0 -f 38:38) if [[ $state -eq 1 ]]; then echo "core ${core}: disabled" else echo "core ${core}: enabled" fi donegive it chmod +x /usr/local/sbin/turbo-boost.shNow you can call sudo turbo-boost.sh disable sudo turbo-boost.sh enable sudo turbo-boost.sh toggleautomatically disable turbo-boost on startup If you want to autostart this one minute after boot, you can allow the execution without password in /etc/sudoers with: # Allow command for my user without password my_username_here ALL = NOPASSWD: /usr/local/sbin/turbo-boost.shThen create a systemd startup script with a delay of 60 seconds: Create the script /etc/systemd/system/turbo-boost-disable.service: [Unit] Description=disables turbo-boost[Service] TimeoutStartSec=infinity ExecStartPre=/bin/sleep 60 ExecStart=/usr/local/sbin/turbo-boost.sh disable[Install] WantedBy=default.targetUpdate systemd with: sudo systemctl daemon-reload sudo systemctl enable turbo-boost-disableAdd toggle button on desktop If you more often want to control the turbo-boost manually, you can add a Button to your desktop:sudo gedit /usr/share/applications/toggle-turbo-boost.desktop[Desktop Entry] Version=1.0 Type=Application Terminal=true Name=toggle turbo-boost Icon=/usr/share/icons/Humanity/apps/64/gkdebconf-icon.svg Exec=sudo /usr/local/sbin/turbo-boost.sh toggle X-MultipleArgs=false Categories=GNOME;GTK; StartupNotify=true GenericName=Toggle Turbo-Boost Path=/tmp/press SUPER and search for "Toggle Turbo Boost", you will see the icon. press ENTER to execute, or right click to "Add to Favorites" which will add a button in the quick-start bar.
I would like to throttle my CPU. I have an i5-8265U with frequencies up to 3.9GHz, but I rarely need the speed. Now if something causes a high load, the CPU clocks up and the fan gets noisy. It is already set to powersave:
$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
powersave
And userspace is not available when I try
sudo cpufreq-set -f 2.0
How do I throttle such a CPU to max 2GHz?
Limit an 8th-generation Intel i5 CPU
Provided you have determined that the BIOS has no good reason for limiting the frequency (in short: YOU KNOW WHAT YOU ARE DOING), you can have Linux ignore the BIOS limit. Change /sys/module/processor/parameters/ignore_ppc from 0 to 1. This will be reset at the next reboot; if you want to set it permanently, simply add processor.ignore_ppc=1 to your boot command line.
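Concretely, on a Debian/Ubuntu system that would look like this (the GRUB step is an assumption; adapt it to your bootloader):

# Runtime only, reset at reboot:
echo 1 | sudo tee /sys/module/processor/parameters/ignore_ppc
# Permanent: add processor.ignore_ppc=1 to GRUB_CMDLINE_LINUX_DEFAULT
# in /etc/default/grub, then regenerate the config:
sudo update-grub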
I am trying to set all my CPUs to their maximum GHz capability. But for some reason, even though the CPU is capable of 5.76GHz, when I set it to performance it only gets to 4.2GHz. Looking at scaling_max_freq, it is all set to 4.2GHz, which I guess could be the issue.
cat /sys/devices/system/cpu/cpufreq/policy*/scaling_max_freq
4200000
However, when I try to change it using the following method, it won't budge to anything higher.
echo 5700000 |tee /sys/devices/system/cpu/cpu*/cpufreq/scaling
I have also tried using cpufreq-set
cpufreq-set -c 0 -d 5.7GHz
OS - Ubuntu 22.04.3 LTS
Kernel - 5.15.0-89-generic
CPU details:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X3D 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU max MHz: 5758.5928
CPU min MHz: 3000.0000
UPDATE: I can see that for some reason Ubuntu seems to think it should limit my CPU.
cat /sys/devices/system/cpu/cpu1/cpufreq/bios_limit
4200000
Cannot set CPU to maximum speed
Peripherals are connected to the main processor via a bus. Some bus protocols support enumeration (also called discovery), i.e. the main processor can ask “what devices are connected to this bus?” and the devices reply with some information about their type, manufacturer, model and configuration in a standardized format. With that information, the operating system can report the list of available devices and decide which device driver to use for each of them. Some bus protocols don't support enumeration, and then the main processor has no way to find out what devices are connected other than guessing. All modern PC buses support enumeration, in particular PCI (the original as well as its extensions and successors such as AGP and PCIe), over which most internal peripherals are connected, USB (all versions), over which most external peripherals are connected, as well as Firewire, SCSI, all modern versions of ATA/SATA, etc. Modern monitor connections also support discovery of the connected monitor (HDMI, DisplayPort, DVI, VGA with EDID). So on a PC, the operating system can discover the connected peripherals by enumerating the PCI bus, and enumerating the USB bus when it finds a USB controller on the PCI bus, etc. Note that the OS has to assume the existence of the PCI bus and the way to probe it; this is standardized on the PC architecture (“PC architecture” doesn't just mean an x86 processor: to be a (modern) PC, a computer also has to have a PCI bus and has to boot in a certain way). Many embedded systems use less fancy buses that don't support enumeration. This was true on PC up to the mid-1990s, before PCI overtook ISA. Most ARM systems, in particular, have buses that don't support enumeration. This is also the case with some embedded x86 systems that don't follow the PC architecture. Without enumeration, the operating system has to be told what devices are present and how to access them. The device tree is a standard format to represent this information. The main reason PC buses support discovery is that they're designed to allow a modular architecture where devices can be added and removed, e.g. adding an extension card into a PC or connecting a cable on an external port. Embedded systems typically have a fixed set of devices¹, and an operating system that's pre-loaded by the manufacturer and doesn't get replaced, so enumeration is not necessary. ¹ If there's an external bus such as USB, USB peripherals are auto-discovered, they wouldn't be mentioned in the device tree.
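You can watch the results of that enumeration yourself on any Linux PC; each of these reads back what the respective bus discovered:

# PCI devices with vendor:device IDs, as enumerated at boot (or hotplug)
lspci -nn
# The same devices, straight from sysfs
ls /sys/bus/pci/devices/
# USB enumeration results
lsusb

On a device-tree system such as an ARM SoC, the non-discoverable hardware shows up under /sys/firmware/devicetree/base instead.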
When booting a kernel in an embedded device, you need to supply a device tree to the Linux kernel, while booting a kernel on a regular x86 pc doesn't require a device tree -- why? As I understand, on an x86 pc the kernel "probes" for hardware (correct me if I'm wrong), so why can't the kernel probe for hardware in and embedded system?
Why do embedded systems need a device tree while PCs don't?
The device tree is exposed as a hierarchy of directories and files in /proc. You can cat the files, eg: find /proc/device-tree/ -type f -exec head {} + | lessBeware, most file content ends with a null char, and some may contain other non-printing characters.
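For a single property, piping through tr strips the trailing NUL so the output is clean; for example, to print the board model (this property exists on most boards, but not all):

tr -d '\0' < /proc/device-tree/model; echo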
I am using an embedded Arm board with a Debian build. How does one list the devices compiled into the device tree? I want to see if a device is already supported. For those reading this, the "Device Tree" is a specification/standard for adding devices to an (embedded) Linux kernel.
How to list the kernel Device Tree [duplicate]
I'm way late on this, but I implemented this script and I'll address this for anyone who finds this using an internet search engine. This computer on module can be put on almost any off the shelf TS or custom baseboard, and we wanted it to automatically work without users having to adjust the device tree used. We have an 8-input shift register on any given carrier board with a unique id for the baseboard. On the TS-8550, this is 0x13. http://wiki.embeddedarm.com/wiki/TS-4900#Baseboard_ID So in U-Boot the bbdetect command we added reads the GPIO connected to this shift register and sets a $baseboardid environment variable. U-Boot will first attempt to load a baseboard specific device tree at /boot/imx6${cpu}-ts4900-${baseboardid}.dtb. If it fails to find one, it will use the fallback device tree at /boot/imx6${cpu}-ts4900.dtb. This latter file has sane defaults that will work on any carrier board. The TS-8550 doesn't need a baseboard specific carrier board so it falls back to the standard device tree and continues to boot. To answer your original question, cat /proc/device-tree/modelAll of our device trees will have a slightly different model in the device tree. For example, the safe fallback is:"Technologic Systems i.MX6 Quad TS-4900 (Default Device Tree)"Or the TS-TPC-8390 carrier board with a specific device tree:"Technologic Systems i.MX6 Quad TS-4900 (TS-TPC-8390)"
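The fallback itself is just a few lines of U-Boot hush scripting. An illustrative sketch, not the exact TS-4900 environment (variable names here are assumptions):

if load mmc 0:1 ${fdtaddr} /boot/imx6${cpu}-ts4900-${baseboardid}.dtb; then
    echo Using baseboard-specific device tree;
else
    load mmc 0:1 ${fdtaddr} /boot/imx6${cpu}-ts4900.dtb;
    echo Falling back to the default device tree;
fi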
I'm working with TS-4900, an embedded 'Computer on Module' plugged into a baseboard, running Yocto Linux. It uses U-Boot to start, and supposedly basing on the model of the baseboard it chooses the right dtb file to start, and possibly if it fails to locate the right one it falls back to a 'generic' one for my module. But how/where does it determine the right one? How can I tell which .dtb was used, or set which one should be used? Below are the boot messages of U-Boot. U-Boot 2014.10-g3ac6ec3 (Jan 29 2015 - 17:20:15)CPU: Freescale i.MX6SOLO rev1.1 at 792 MHz Reset cause: POR Board: TS-4900 Revision: C Watchdog enabled I2C: ready DRAM: 1 GiB MMC: FSL_SDHC: 0, FSL_SDHC: 1 SF: Detected N25Q64 with page size 256 Bytes, erase size 4 KiB, total 8 MiB *** Warning - bad CRC, using default environmentIn: serial Out: serial Err: serial Net: using phy at 7 FEC [PRIME] Press Ctrl+C to abort autoboot in 1 second(s) (Re)start USB... USB0: Port not available. USB1: USB EHCI 1.00 scanning bus 1 for devices... 2 USB Device(s) found scanning usb for storage devices... 0 Storage Device(s) found No storage devices, perhaps not 'usb start'ed..? Booting from the eMMC ... ** File not found /boot/boot.ub ** ** File not found /boot/imx6dl-ts4900-13.dtb ** Booting default device tree 42507 bytes read in 196 ms (210.9 KiB/s) 118642 bytes read in 172 ms (672.9 KiB/s) ICE40 FPGA reloaded successfully 4609784 bytes read in 337 ms (13 MiB/s) ## Booting kernel from Legacy Image at 12000000 ... Image Name: Linux-3.10.17-1.0.0-technologic+ Image Type: ARM Linux Kernel Image (uncompressed) Data Size: 4609720 Bytes = 4.4 MiB Load Address: 10008000 Entry Point: 10008000 Verifying Checksum ... OK ## Flattened Device Tree blob at 18000000 Booting using the fdt blob at 0x18000000 EHCI failed to shut down host controller. Loading Kernel Image ... OK Using Device Tree in place at 18000000, end 1800d60aStarting kernel ...[ 0.000000] Booting Linux on physical CPU 0x0(Kernel startup commences...)
How do I tell which device tree blob (dtb file) I'm using?
A working solution to get the driver to bind to the device is: cgublock: jz4780-cgublock@10000000 { compatible = "simple-bus", "syscon"; #address-cells = <1>; #size-cells = <1>; reg = <0x10000000 0x100>; ranges; cgu: jz4780-cgu@10000000 { compatible = "ingenic,jz4780-cgu"; reg = <0x10000000 0x100>; clocks = <&ext>, <&rtc>; clock-names = "ext", "rtc"; #clock-cells = <1>; }; rng: rng@d8 { compatible = "ingenic,jz4780-rng"; reg = <0x100000d8 0x8>; }; };This was found by staring at other examples. I would prefer a solution where I get a proper diagnosis why the previous attempt is incorrect.
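A quick way to confirm the fix after rebooting into the new tree is to look for a bound device under the driver's sysfs directory; with the node above you would expect something like this (the exact unit-address name may differ):

ls /sys/bus/platform/drivers/jz4780-rng/
# expect a 100000d8.rng-style symlink next to bind/unbind/uevent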
I am trying to figure out why the following device is not setup to its driver on my Creator CI20. For reference I am using a Linux kernel v4.13.0 and doing the compilation locally: make ARCH=mips ci20_defconfig make -j8 ARCH=mips CROSS_COMPILE=mipsel-linux-gnu- uImageFrom the running system I can see: ci20@ci20:~# find /sys | grep rng /sys/firmware/devicetree/base/jz4780-cgu@10000000/rng@d8 /sys/firmware/devicetree/base/jz4780-cgu@10000000/rng@d8/compatible /sys/firmware/devicetree/base/jz4780-cgu@10000000/rng@d8/name /sys/bus/platform/drivers/jz4780-rng /sys/bus/platform/drivers/jz4780-rng/bind /sys/bus/platform/drivers/jz4780-rng/unbind /sys/bus/platform/drivers/jz4780-rng/ueventSo the device is seen by the kernel at runtime, now the missing piece is why the driver is never binded ? I would have expected something like this: /sys/bus/platform/drivers/jz4780-rng/100000d8.rngI did find some other posts explaining how to debug a running system, such as:https://stackoverflow.com/questions/28406776/driver-binding-using-device-tree-without-compatible-string-in-the-driver https://stackoverflow.com/questions/35580862/device-tree-mismatch-probe-never-called https://stackoverflow.com/questions/41446737/platform-device-driver-autoloading-mechanism Is it possible to get the information for a device tree using /sys of a running kernel?While the information is accurate on those posts, it is not very helpful for me. Since I am building locally my kernel (I added printk in the probe function of jz4780-rng driver), my question is instead: what option should I turn on at compile time so that the kernel prints an accurate information on its failure to call the probe function for the jz4780-rng driver ? In particular how do I print the complete list of the tested bus/driver for driver_probe_device ?I am ok to add printk anywhere in the code to debug this. The question is rather: which function is traversing the device tree and calling the probe/init function ? 
For reference: $ dtc -I fs -O dts /sys/firmware/devicetree/base | grep -A 1 rng rng@d8 { compatible = "ingenic,jz4780-rng"; };compatible string is declared as: cgu: jz4780-cgu@10000000 { compatible = "ingenic,jz4780-cgu", "syscon"; reg = <0x10000000 0x100>; clocks = <&ext>, <&rtc>; clock-names = "ext", "rtc"; #clock-cells = <1>; rng: rng@d8 { compatible = "ingenic,jz4780-rng"; }; };And in the driver as: static const struct of_device_id jz4780_rng_dt_match[] = { { .compatible = "ingenic,jz4780-rng", }, { }, }; MODULE_DEVICE_TABLE(of, jz4780_rng_dt_match);static struct platform_driver jz4780_rng_driver = { .driver = { .name = "jz4780-rng", .of_match_table = jz4780_rng_dt_match, }, .probe = jz4780_rng_probe, .remove = jz4780_rng_remove, }; module_platform_driver(jz4780_rng_driver);Update1: When I build my kernel with CONFIG_DEBUG_DRIVER=y, here is what I can see: # grep driver_probe_device syslog Sep 6 10:08:07 ci20 kernel: [ 0.098280] bus: 'platform': driver_probe_device: matched device 10031000.serial with driver ingenic-uart Sep 6 10:08:07 ci20 kernel: [ 0.098742] bus: 'platform': driver_probe_device: matched device 10033000.serial with driver ingenic-uart Sep 6 10:08:07 ci20 kernel: [ 0.099209] bus: 'platform': driver_probe_device: matched device 10034000.serial with driver ingenic-uart Sep 6 10:08:07 ci20 kernel: [ 0.106945] bus: 'platform': driver_probe_device: matched device 1b000000.nand-controller with driver jz4780-nand Sep 6 10:08:07 ci20 kernel: [ 0.107282] bus: 'platform': driver_probe_device: matched device 134d0000.bch with driver jz4780-bch Sep 6 10:08:07 ci20 kernel: [ 0.107470] bus: 'platform': driver_probe_device: matched device 16000000.dm9000 with driver dm9000 Sep 6 10:08:07 ci20 kernel: [ 0.165618] bus: 'platform': driver_probe_device: matched device 10003000.rtc with driver jz4740-rtc Sep 6 10:08:07 ci20 kernel: [ 0.166177] bus: 'platform': driver_probe_device: matched device 10002000.jz4780-watchdog with driver jz4740-wdt Sep 6 10:08:07 ci20 kernel: [ 0.170930] bus: 'platform': driver_probe_device: matched device 1b000000.nand-controller with driver jz4780-nandBut only: # grep rng syslog Sep 6 10:08:07 ci20 kernel: [ 0.166842] bus: 'platform': add driver jz4780-rng Sep 6 10:08:42 ci20 kernel: [ 54.584451] random: crng init doneAs a side note, the rng toplevel node: cgu is not referenced here, but there is a jz4780-cgu driver.Update2: If I move the rng node declaration outside the toplevel cgu node, I can at least see some binding happening at last: # grep rng /var/log/syslog Sep 6 10:30:57 ci20 kernel: [ 0.167017] bus: 'platform': add driver jz4780-rng Sep 6 10:30:57 ci20 kernel: [ 0.167033] bus: 'platform': driver_probe_device: matched device 10000000.rng with driver jz4780-rng Sep 6 10:30:57 ci20 kernel: [ 0.167038] bus: 'platform': really_probe: probing driver jz4780-rng with device 10000000.rng Sep 6 10:30:57 ci20 kernel: [ 0.167050] jz4780-rng 10000000.rng: no pinctrl handle Sep 6 10:30:57 ci20 kernel: [ 0.167066] devices_kset: Moving 10000000.rng to end of list Sep 6 10:30:57 ci20 kernel: [ 0.172774] jz4780-rng: probe of 10000000.rng failed with error -22 Sep 6 10:31:32 ci20 kernel: [ 54.802794] random: crng init doneUsing: rng: rng@100000d8 { compatible = "ingenic,jz4780-rng"; };I can also verify: # find /sys/ | grep rng /sys/devices/platform/10000000.rng /sys/devices/platform/10000000.rng/subsystem /sys/devices/platform/10000000.rng/driver_override /sys/devices/platform/10000000.rng/modalias /sys/devices/platform/10000000.rng/uevent 
/sys/devices/platform/10000000.rng/of_node /sys/firmware/devicetree/base/rng@100000d8 /sys/firmware/devicetree/base/rng@100000d8/compatible /sys/firmware/devicetree/base/rng@100000d8/status /sys/firmware/devicetree/base/rng@100000d8/reg /sys/firmware/devicetree/base/rng@100000d8/name /sys/bus/platform/devices/10000000.rng /sys/bus/platform/drivers/jz4780-rng /sys/bus/platform/drivers/jz4780-rng/bind /sys/bus/platform/drivers/jz4780-rng/unbind /sys/bus/platform/drivers/jz4780-rng/uevent
How to debug a driver failing to bind to a device on Linux?
You don't need to do this. With this change, overlays are in u-boot! https://github.com/u-boot/u-boot/commit/e6628ad7b99b285b25147366c68a7b956e362878 Enjoy :)
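With that commit you can apply overlays at the U-Boot prompt, and if you still prefer one merged blob at build time, the fdtoverlay tool shipped with dtc does the same job offline. A sketch of both, with addresses and file names as placeholders:

# In U-Boot, after loading the base dtb at $fdtaddr and the overlay at $ovladdr:
fdt addr $fdtaddr
fdt resize 8192
fdt apply $ovladdr
# Offline, at build time:
fdtoverlay -i base.dtb -o merged.dtb overlay1.dtbo overlay2.dtbo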
My board boots via U-Boot and AFAIK that bootloader does not support device tree overlays, so I'm probably forced to generate a single, static .dtb with all relevant overlays (and settings??) already applied to it. In principle that would be okay for me, but how do I do that? Is there some command line tool that takes .dtb and .dtbo files resp. .dts and .dtsi files and combines them into a single .dtb / .dts? dtc doesn't seem to do that job. The ultimate goal is to get I²C working on a Raspberry B+ that boots via U-Boot.
how to merge device tree overlays to a single .dtb at build time?
The Device Tree is supposed to be a stable ABI so a device tree written for any version of the kernel should work with any following kernel version. However, for practical reasons, this is quite often not the case. You can have a look at the following presentation from Thomas, explaining why: http://free-electrons.com/pub/conferences/2015/elc/petazzoni-dt-as-stable-abi-fairy-tale/petazzoni-dt-as-stable-abi-fairy-tale.pdf Video: https://www.youtube.com/watch?v=rPRqIS9q6CY
I was asking myself if a certain dtb that works with linux kernel version 3.18 is compatible with a linux kernel version 4.9. I suppose not, because kernel code concerning the device tree likely changes over the time, but it somehow has to be compatible otherwise multiple dts/dtsi files have to change all the time. I used google to investigate this, but even in the official documentation I couldn't find a word about compatibility throughout different kernel versions.
Is a device tree blob tied to a specific linux kernel version?
The issue with your clocks is that clocks declared outside the TI clock domains are not parsed and set up correctly in 3.17. This issue is resolved in kernel version 4.0.5. The required change occurred in the function omap_clk_init at the end of arch/arm/mach-omap2/io.c: there is an extra call there to of_clk_init(NULL) which doesn't exist in 3.17. Some relevant discussion is here: http://patchwork.ozlabs.org/patch/375753/
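If you want to verify on a given kernel whether your fixed-clock was actually registered, the common clock framework's debugfs view is useful; assuming debugfs support is enabled:

sudo mount -t debugfs none /sys/kernel/debug 2>/dev/null
grep clk14m /sys/kernel/debug/clk/clk_summary

On 3.17 the clock will simply be missing from the summary; on 4.0.5+ it should appear with its 14745600 Hz rate.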
I'm using a DTS file for a Duovero Parlor board. To this board I've added some SPI devices. My first (a display) works perfectly, so I have that entry correct at least. I want to add an entry to support the SPI-connected NXP SC16IS752 UART controller. (There have been patches on lkml recently that I want to try.) This is my entry:
clocks {
    clk14m: oscillator {
        #clock-cells = <0>;
        compatible = "fixed-clock";
        clock-frequency = <14745600>;
    };
};

&mcspi4 {
    sc16is752: sc16is752@0 {
        compatible = "nxp,sc16is752";
        reg = <0>;
        spi-max-frequency = <4000000>;
        clocks = <&clk14m>;
        interrupt-parent = <&gpio4>;
        interrupt = <15 IRQ_TYPE_EDGE_FALLING>;
        gpio-controller;
        #gpio-cells = <2>;
    };
};
It looks vaguely right. The SPI bus is 4MHz, mode 0. The interrupt is GPIO 111, which is <&gpio4 15>. My problem is specifying the clock. It's a standalone crystal oscillator connected directly to the chip. So is that clocks entry right? Because the clock is standalone I've no idea where to place it, so "clocks" sounds right but I'm totally guessing. When I compile the dts it fails with a syntax error though, so something is wrong somewhere. I'm not sure if the #gpio-cells is correct either. Does that mean the gpio numbering will start at 200 and go up?
Clocks entry in SPI device tree entry
In Linux, network interfaces don't have a device node in /dev at all. If you need the list of usable network interfaces e.g. in a script, look into the /sys/class/net/ directory; you'll see one symbolic link per interface. Each network interface that has a driver loaded will be listed. Programmatically, you can use the if_nameindex() syscall: see this answer on Stack Overflow. Also, note that /dev is the device filesystem. The device-tree has a specific different meaning: it is a machine-readable description of a system's hardware composition. It is used on systems that don't have Plug-and-Play capable hardware buses, or otherwise have hardware that cannot be automatically discovered. As an example, Linux on ARM SoCs like Raspberry Pi uses a device tree. The boot sequence of a RasPi is quite interesting: see this question on RasPi.SE. In short, at boot time, under control of the /boot/start.elf file, the GPU loads the appropriate /boot/*.dtb and /boot/overlay/*.dtbo files before the main ARM CPU is started. The *.dtb file is the compiled device tree in binary format. It describes the hardware that can be found on each specific RasPi model, and is produced from a device tree source file (.dts) which is just text, formatted in a specific way. The kernel's live image of the device-tree can be seen in: /sys/firmware/devicetree/base Per Ciro Santilli, it can be displayed in .dts format by:
sudo apt-get install device-tree-compiler
dtc -I fs -O dts /sys/firmware/devicetree/base
You can find the specification of the device tree file format here. The specification is intended to be OS-independent. You may also need the Device Tree Reference as clarification to some details. So, the answer to your original question is like this:
the Berkeley Sockets API gets the network interface from the kernel
the kernel gets the essential hardware information from the device tree file
the device tree file is loaded by the GPU with /boot/start.elf according to configuration in /boot/config.txt
the device tree file was originally created according to the hardware specifications of each RasPi model and compiled to the appropriate binary format.
The device tree scanning code is mostly concerned about finding a valid driver for each piece of hardware. It won't much care about each device's purpose: that's the driver's job. The driver uses the appropriate *_register_driver() kernel function to document its own existence, takes the appropriate part of the device tree information to find the actual hardware, and then uses other functions to register that hardware as being under its control. Once the driver has initialized the hardware, it uses the kernel's register_netdev() function to register itself as a new network interface, which, among other things, will make the Sockets API (which is just another interface of the kernel, not an independent entity as such) aware that the network interface exists. The driver is likely to register itself for other things too: it will list a number of ethtool operations it supports for link status monitoring, traffic statistics and other low-level functions, and a driver for a wireless NIC will also use register_wiphy() to declare itself as a wireless network interface with specific Wi-Fi capabilities. The Linux TCP/IP stack has many interfaces: the Berkeley Sockets API is the side of it that will be the most familiar to application programmers. The netdev API is essentially the other, driver-facing side of the same coin.
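To see all of this from a shell on the RasPi, without writing any C:

# Every interface whose driver registered with register_netdev()
ls /sys/class/net/
# Link state and the driver behind one of them (wlan0 as an example)
cat /sys/class/net/wlan0/operstate
readlink /sys/class/net/wlan0/device/driver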
(On raspberry pi zero w, kernel 4.14y) It seems the wireless adapter chip isn't a device in the /dev fs, but is the name of something that 'ifconfig' knows about. I understand that this is an artifact from Berkley Sockets. It is hardware, I assume it must be mentioned in the device tree -- to cause some driver to be loaded, but it must not create an entry in /dev (devfs). Where/how does Sockets find this device that is not a device?
How does Linux find/configure something like 'wlan0' when it is not a device that appears in /dev?
The solution was to update the firmware of the fingerprint device. I achieved this by:
Installing fwupd:
sudo pacman -S fwupd
Checking if the system can see the device:
fwupdmgr get-devices
Refreshing the firmware database:
fwupdmgr refresh --force
Updating the firmware:
fwupdmgr update
You have to reboot immediately to apply the update and prevent the device from misbehaving. After all these steps fprintd-enroll will run without problems.
I am trying to make my fingerprint sensor work on a ThinkPad X390 Yoga. I installed the fprintd package using yay. When I try to run fprintd-enroll, I get this error:
Using device /net/reactivated/Fprint/Device/0
failed to claim device: GDBus.Error:net.reactivated.Fprint.Error.Internal: Open failed with error: The driver encountered a protocol error with the device.
When I try to run it a second time I get this:
Using device /net/reactivated/Fprint/Device/0
failed to claim device: GDBus.Error:net.reactivated.Fprint.Error.Internal: Open failed with error: Device 06cb:00bd is already open
I tried installing the thinkfinger package but still no luck. How can I resolve this problem? This is my lsusb output:
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 010: ID 06cb:00bd Synaptics, Inc. Prometheus MIS Touch Fingerprint Reader
Bus 001 Device 008: ID 04f2:b67c Chicony Electronics Co., Ltd Integrated Camera
Bus 001 Device 033: ID 2cb7:0210 Fibocom L830-EB-00 LTE WWAN Modem
Bus 001 Device 005: ID 056a:51af Wacom Co., Ltd Pen and multitouch sensor
Bus 001 Device 012: ID 8087:0aaa Intel Corp. Bluetooth 9460/9560 Jefferson Peak (JfP)
Bus 001 Device 002: ID 058f:9540 Alcor Micro Corp. AU9540 Smartcard Reader
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Thanks for help
fprintd: The driver encountered a protocol error with the device
After being in a situation where this could be tested, I've witnessed that you can't use a device tree compiled for kernel 3.10 on kernel 3.14, and vice versa.
I have a situation in which the same device trees are used with different kernels. Can the device trees be built only once and used with all kernels? The reason I ask is that the device tree compiler has a repository separate from the kernel. Also, the explanation from this answer doesn't relate device tree compilation directly to a kernel version.
Is the device tree compiler tied to the kernel version?
The 'dtsi' files you are seeking are in the kernel source tree, not the boot mount. In your case the 'dtsi' content has already been 'compiled into' the 'dtb' files. 'dtsi' files function like C header files: they are "prepended" to 'dts' files, which are then compiled into 'dtb'. In the case of ARM, look in the Linux source under arch/arm/boot/dts/ and see what's in there! I hope that's helpful!
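You can also decompile the blob that U-Boot reported loading and inspect the fully merged tree, includes and all; assuming the device-tree-compiler package is installed:

dtc -I dtb -O dts /boot/dtbs/4.19.94-ti-r42/am335x-boneblack-uboot-univ.dtb | less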
I am attempting to review my device tree to learn more about how the usb endpoints are defined, and also just to learn more about device trees. I am currently using a BeagleBone Black image, and I believe I am booting from the am335x-boneblack-uboot-univ.dtb device tree blob. Below you can see the output my device gives on bootup: U-Boot SPL 2019.04-00002-g07d5700e21 (Mar 06 2020 - 11:24:55 -0600) Trying to boot from MMC2 Loading Environment from EXT4... ** File not found /boot/uboot.env **** Unable to read "/boot/uboot.env" from mmc0:1 **U-Boot 2019.04-00002-g07d5700e21 (Mar 06 2020 - 11:24:55 -0600), Build: jenkins-github_Bootloader-Builder-137CPU : AM335X-GP rev 2.1 I2C: ready DRAM: 512 MiB No match for driver 'omap_hsmmc' No match for driver 'omap_hsmmc' Some drivers were not found Reset Source: Global warm SW reset has occurred. Reset Source: Power-on reset has occurred. RTC 32KCLK Source: External. MMC: OMAP SD/MMC: 0, OMAP SD/MMC: 1 Loading Environment from EXT4... ** File not found /boot/uboot.env **** Unable to read "/boot/uboot.env" from mmc0:1 ** Board: BeagleBone Black <ethaddr> not set. Validating first E-fuse MAC BeagleBone Black: BeagleBone: cape eeprom: i2c_probe: 0x54: BeagleBone: cape eeprom: i2c_probe: 0x55: BeagleBone: cape eeprom: i2c_probe: 0x56: BeagleBone: cape eeprom: i2c_probe: 0x57: Net: eth0: MII MODE Could not get PHY for cpsw: addr 0 cpsw, usb_ether Press SPACE to abort autoboot in 0 seconds board_name=[A335BNLT] ... board_rev=[] ... switch to partitions #0, OK mmc0 is current device SD/MMC found on device 0 switch to partitions #0, OK mmc0 is current device Scanning mmc 0:1... gpio: pin 56 (gpio 56) value is 0 gpio: pin 55 (gpio 55) value is 0 gpio: pin 54 (gpio 54) value is 0 gpio: pin 53 (gpio 53) value is 1 switch to partitions #0, OK mmc0 is current device gpio: pin 54 (gpio 54) value is 1 Checking for: /uEnv.txt ... Checking for: /boot.scr ... Checking for: /boot/boot.scr ... Checking for: /boot/uEnv.txt ... gpio: pin 55 (gpio 55) value is 1 2082 bytes read in 28 ms (72.3 KiB/s) Loaded environment from /boot/uEnv.txt Checking if uname_r is set in /boot/uEnv.txt... gpio: pin 56 (gpio 56) value is 1 Running uname_boot ... loading /boot/vmlinuz-4.19.94-ti-r42 ... 10095592 bytes read in 657 ms (14.7 MiB/s) debug: [enable_uboot_overlays=1] ... debug: [enable_uboot_cape_universal=1] ... debug: [uboot_base_dtb_univ=am335x-boneblack-uboot-univ.dtb] ... uboot_overlays: [uboot_base_dtb=am335x-boneblack-uboot-univ.dtb] ... uboot_overlays: Switching too: dtb=am335x-boneblack-uboot-univ.dtb ... loading /boot/dtbs/4.19.94-ti-r42/am335x-boneblack-uboot-univ.dtb ... 174145 bytes read in 53 ms (3.1 MiB/s) uboot_overlays: [fdt_buffer=0x60000] ... uboot_overlays: uboot loading of [/lib/firmware/BB-ADC-00A0.dtbo] disabled by /boot/uEnv.txt [disable_uboot_overlay_adc=1]... uboot_overlays: loading /lib/firmware/am335x-osd3358-mt-01.dtbo ... 5769 bytes read in 1078 ms (4.9 KiB/s) uboot_overlays: loading /lib/firmware/BB-BBBW-WL1835-00A0.dtbo ... 3536 bytes read in 1650 ms (2 KiB/s) uboot_overlays: loading /lib/firmware/BB-BONE-eMMC1-01-00A0.dtbo ... 1614 bytes read in 1351 ms (1000 Bytes/s) uboot_overlays: uboot loading of [/lib/firmware/BB-HDMI-TDA998x-00A0.dtbo] disabled by /boot/uEnv.txt [disable_uboot_overlay_video=1]... uboot_overlays: loading /lib/firmware/AM335X-PRU-RPROC-4-19-TI-00A0.dtbo ... 3653 bytes read in 1216 ms (2.9 KiB/s) loading /boot/initrd.img-4.19.94-ti-r42 ... 
7051030 bytes read in 468 ms (14.4 MiB/s) debug: [console=ttyO0,115200n8 bone_capemgr.uboot_capemgr_enabled=1 root=/dev/mmcblk0p1 ro rootfstype=ext4 rootwait coherent_pool=1M net.ifnames=0 lpj=1990656 rng_core.default_quality=100 quiet] ... debug: [bootz 0x82000000 0x88080000:6b9716 88000000] ... ## Flattened Device Tree blob at 88000000 Booting using the fdt blob at 0x88000000 Loading Ramdisk to 8f946000, end 8ffff716 ... OK Loading Device Tree to 8f8b6000, end 8f945fff ... OKStarting kernel ...After reviewing the am335x-boneblack-uboot-univ.dts source file, I could see there were a few .dtsi include files as part of it. The .dts file can be seen below: /* * Copyright (C) 2012 Texas Instruments Incorporated - http://www.ti.com/ * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. */ /dts-v1/;#include "am33xx.dtsi" #include "am335x-bone-common.dtsi" #include "am335x-bone-common-univ.dtsi"/ { model = "TI AM335x BeagleBone Black"; compatible = "ti,am335x-bone-black", "ti,am335x-bone", "ti,am33xx"; };&sgx { status = "okay"; };&cpu0_opp_table { /* * All PG 2.0 silicon may not support 1GHz but some of the early * BeagleBone Blacks have PG 2.0 silicon which is guaranteed * to support 1GHz OPP so enable it for PG 2.0 on this board. */ oppnitro-1000000000 { opp-supported-hw = <0x06 0x0100>; }; };&ldo3_reg { regulator-min-microvolt = <1800000>; regulator-max-microvolt = <1800000>; regulator-always-on; };&mmc1 { vmmc-supply = <&vmmcsd_fixed>; };I am wondering where I can find the .dtsi include files, and whether they will contain the remaining device tree information. Perhaps this isn't even the device tree I should be looking at. I am not sure as there are many of them in my /boot/dtbs/4.19.94-ti-r42 directory. In my research it seems the .dtsi files are stored in arch/arm/boot/dts but I am on an arm32 system and do not have an arch directory. I would appreciate any assistance as I learn more about editing device trees. 
For additional information, here is my uEnv.txt file: #Docs: http://elinux.org/Beagleboard:U-boot_partitioning_layout_2.0uname_r=4.19.94-ti-r42 #uuid= #dtb=###U-Boot Overlays### ###Documentation: http://elinux.org/Beagleboard:BeagleBoneBlack_Debian#U-Boot_Overlays ###Master Enable enable_uboot_overlays=1 ### ###Overide capes with eeprom uboot_overlay_addr0=/lib/firmware/am335x-osd3358-mt-01.dtbo uboot_overlay_addr1=/lib/firmware/BB-BBBW-WL1835-00A0.dtbo #uboot_overlay_addr2=/lib/firmware/<file2>.dtbo #uboot_overlay_addr3=/lib/firmware/<file3>.dtbo ### ###Additional custom capes #uboot_overlay_addr4=/lib/firmware/<file4>.dtbo #uboot_overlay_addr5=/lib/firmware/<file5>.dtbo #uboot_overlay_addr6=/lib/firmware/<file6>.dtbo #uboot_overlay_addr7=/lib/firmware/<file7>.dtbo ### ###Custom Cape #dtb_overlay=/lib/firmware/<file8>.dtbo ### ###Disable auto loading of virtual capes (emmc/video/wireless/adc) #disable_uboot_overlay_emmc=1 disable_uboot_overlay_video=1 disable_uboot_overlay_audio=1 #disable_uboot_overlay_wireless=1 disable_uboot_overlay_adc=1 ### ###PRUSS OPTIONS ###pru_rproc (4.14.x-ti kernel) #uboot_overlay_pru=/lib/firmware/AM335X-PRU-RPROC-4-14-TI-00A0.dtbo ###pru_rproc (4.19.x-ti kernel) uboot_overlay_pru=/lib/firmware/AM335X-PRU-RPROC-4-19-TI-00A0.dtbo ###pru_uio (4.14.x-ti, 4.19.x-ti & mainline/bone kernel) #uboot_overlay_pru=/lib/firmware/AM335X-PRU-UIO-00A0.dtbo ### ###Cape Universal Enable enable_uboot_cape_universal=1 ### ###Debug: disable uboot autoload of Cape #disable_uboot_overlay_addr0=1 #disable_uboot_overlay_addr1=1 #disable_uboot_overlay_addr2=1 #disable_uboot_overlay_addr3=1 ### ###U-Boot fdt tweaks... (60000 = 384KB) #uboot_fdt_buffer=0x60000 ###U-Boot Overlays###cmdline=coherent_pool=1M net.ifnames=0 lpj=1990656 rng_core.default_quality=100 quiet#In the event of edid real failures, uncomment this next line: #cmdline=coherent_pool=1M net.ifnames=0 lpj=1990656 rng_core.default_quality=100 quiet video=HDMI-A-1:1024x768@60e##enable Generic eMMC Flasher: ##make sure, these tools are installed: dosfstools rsync #cmdline=init=/opt/scripts/tools/eMMC/init-eMMC-flasher-v3.shAnd here are the contents of the /boot/dtbs/4.19.94-ti-r42 directory: debian@beaglebone:~$ ls /boot/dtbs/4.19.94-ti-r42/ am335x-abbbi.dtb am335x-baltos-ir2110.dtb am335x-baltos-ir3220.dtb am335x-baltos-ir5221.dtb am335x-base0033.dtb am335x-boneblack-audio.dtb am335x-boneblack-bbb-exp-c.dtb am335x-boneblack-bbb-exp-r.dtb am335x-boneblack-bbbmini.dtb am335x-boneblack.dtb am335x-boneblack-prusuart.dtb am335x-boneblack-roboticscape.dtb am335x-boneblack-uboot.dtb am335x-boneblack-uboot-univ.dtb am335x-boneblack-wireless.dtb am335x-boneblack-wireless-roboticscape.dtb am335x-boneblack-wl1835mod.dtb am335x-boneblue.dtb am335x-bone.dtb am335x-bonegreen.dtb am335x-bonegreen-gateway.dtb am335x-bonegreen-wireless.dtb am335x-bonegreen-wireless-uboot-univ.dtb am335x-bone-uboot-univ.dtb am335x-chiliboard.dtb am335x-cm-t335.dtb am335x-evm.dtb am335x-evmsk.dtb am335x-icev2.dtb am335x-icev2-prueth.dtb am335x-lxm.dtb am335x-moxa-uc-8100-me-t.dtb am335x-nano.dtb am335x-osd3358-sm-red.dtb am335x-pdu001.dtb am335x-pepper.dtb am335x-phycore-rdk.dtb am335x-pocketbeagle.dtb am335x-revolve.dtb am335x-sancloud-bbe.dtb am335x-sbc-t335.dtb am335x-shc.dtb am335x-sl50.dtb am335x-wega-rdk.dtb am437x-cm-t43.dtb am437x-gp-evm.dtb am437x-gp-evm-hdmi.dtb am437x-idk-evm.dtb am437x-sbc-t43.dtb am437x-sk-evm.dtb am43x-epos-evm.dtb am43x-epos-evm-hdmi.dtb am5729-beagleboneai.dtb am5729-beagleboneai-roboticscape.dtb am572x-idk.dtb am574x-idk.dtb 
am57xx-beagle-x15.dtb am57xx-beagle-x15-revb1.dtb am57xx-beagle-x15-revc.dtb am57xx-cl-som-am57x.dtb am57xx-sbc-am57x.dtb dra71-evm.dtb dra72-evm.dtb dra72-evm-revc.dtb dra76-evm.dtb dra7-evm.dtb omap5-cm-t54.dtb omap5-igep0050.dtb omap5-sbc-t54.dtb omap5-uevm.dtb
Where can I find the device tree source include (.dtsi) files?
cmdk@0,0:h is the driver instance for a disk. Per the Solaris documentation:

The cmdk device driver is a common interface to various disk devices. The driver supports magnetic fixed disks and magnetic removable disks.

Reading the path from left to right: pci@0,0 is the root PCI bus, pci-ide@1,1 is the IDE controller at PCI device 1, function 1, ide@0 is the first (primary) IDE channel, and cmdk@0,0 is the disk at target 0, lun 0, i.e. the master disk on that channel. The :h suffix selects the minor node, which here corresponds to slice 7; that is why the c0d0s7 link points at it.
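You can explore the same tree from the Solaris side; prtconf prints the kernel's device tree, and with -D it also shows which driver is bound to each node. A quick sketch:

prtconf -D            # device tree with the bound driver per node; look for pci-ide and cmdk
ls -l /dev/dsk/c0d0*  # every slice link resolves to the same physical path, differing only in the :x minor suffix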
On Solaris I know disks are c0d0p0 or c0d0s0 for IDE, and c1t1d1s0 for SCSI. I see these are links to PCI device paths, for example: ls -lh /dev/dsk/c0d0s7 lrwxrwxrwx 1 root root 50 ago 1 22:53 /dev/dsk/c0d0s7 -> ../../devices/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0:h Does anyone know what pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0:h means? I think pci@0,0 is the PCI bus and pci-ide@1,1 is the first controller. Is ide@0 the master disk? And what about cmdk@0,0:h?
Reading the device tree for ide disks
A compile-time kernel configuration specifies whether or not to include each of the standard drivers in the kernel source tree, how those drivers will be included (as built-in or as loadable modules), and a number of other parameters related to e.g. what kind of optimizations and other choices will be used in compiling the kernel (e.g. optimizing for specific CPU models or staying as generic as possible, or whether or not to enable features like the Spectre/Meltdown security mitigations by default). If a compile-time kernel configuration is generic enough, the same kernel can be used with a large number of different systems within the same processor architecture.

The device tree, on the other hand, is about the actual hardware the kernel is currently running on. For embedded systems and other systems with no autoprobeable technologies like ACPI or PCI(e), the device tree specifies the exact I/O or memory addresses at which specific hardware components will be found, so that the drivers are able to find and use those hardware components.

Even if the device tree describes that a particular hardware component exists and how it can be accessed, if the necessary driver for it is disabled at compile time, then the kernel will be unable to use it unless the driver module is added separately later. And if the kernel is compiled with a completely monolithic configuration with no loadable module support, then that kernel won't be able to use a particular device at all unless the driver for it was included in the kernel compilation. Conversely, if a driver for a hardware component is included in the kernel configuration (either built-in or as a loadable module) but there is no information for it in the device tree (and the hardware architecture does not include standard detection mechanisms), then the kernel will be unaware of the existence of that hardware component.

A wrong device tree can also be actively harmful. For example, if the device tree incorrectly specifies the display controller's framebuffer area as regular usable RAM, then you might get a random display of whatever bytes happen to get stored in the display controller's default framebuffer area, as a jumble of pixel "noise". Or, if the display controller needs a specific initialization to start working, you might get no output at all.

In other words, the purpose of the device tree is to tell the kernel where the various hardware features of the system are located in the processor's address space(s), both to enable the kernel to point the correct drivers at the correct physical hardware, and to tell the kernel which parts of the address space it should not attempt to use as regular RAM.
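On a running system you can compare the two halves directly; a minimal sketch, assuming your kernel exposes its configuration (CONFIG_IKCONFIG_PROC) and boots with a device tree (the driver name is just an illustration):

zcat /proc/config.gz | grep XILINX_PS_UART   # what the compile-time configuration decided about a driver
ls /proc/device-tree/                        # top-level nodes of the device tree the kernel is actually running with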
I know Kconfig serves to tune the kernel configuration (ultimately via C preprocessor definitions) at the start of the Linux kernel compilation. And the device tree is used to give a compiled kernel a description of the hardware at runtime. How do these two configurability features overlap? Both give information about intrinsic CPU details and drivers for external peripherals, for example. I imagine that any peripheral mentioned in the device tree must have gotten its driver previously declared in the .config file. I can guess too that if a driver has been compiled as built-in, it will not be loaded as a module again... but what finer details are there?
How does kbuild compare to the device tree?
Solution found!

Add the definition of the standard output in the device tree (in the chosen section):

linux,stdout-path = "serial0:115200n8";

where serial0 points to my serial interface.

Unset the virtual terminal console in the kernel configuration (in order to avoid a kind of precedence of this driver):

CONFIG_VT=y
# CONFIG_VT_CONSOLE is not set

@Rui F Ribeiro & @Josh Benson, thanks for your support. Regards
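Both settings can be checked on the running system afterwards; a quick sanity check, assuming the chosen property is present in the live tree:

cat /proc/device-tree/chosen/stdout-path   # what the device tree handed to the kernel
cat /sys/class/tty/console/active          # the console(s) the kernel actually activated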
I'm in the process of porting the embedded system for a picozed-based platform from Xilinx-v2013.4 (3.12 kernel) to Xilinx-v2016.2 (4.4 kernel). The former version still makes use of an initial RAM disk (initrd) while the new one uses an initial RAM fs (initramfs). At boot time, the console is given through the serial interface on the USB connector, and I expect it to be ttyPS0. Already at this point, I don't know how this 'console to ttyPS0' relation is established. Does it come from the device tree (I don't see anything mentioning ttyPS0)? In the former version (in the RAM disk), it was not even configured in the "init" scripts, nor in the 'mdev' configuration file. The boot process runs and then hangs. Here is the output: Starting kernel ...Uncompressing Linux... done, booting the kernel. Booting Linux on physical CPU 0x0 Linux version 4.4.0-test (pierrett@build0109-linux) (gcc version 4.8.3 20140320 (prerelease) (Sourcery CodeBench Lite 2014.05-23) ) #1 SMP PREEMPT Thu Aug 18 12:10:52 CEST 2016 CPU: ARMv7 Processor [413fc090] revision 0 (ARMv7), cr=18c5387d CPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing instruction cache Machine model: zynq bootconsole [earlycon0] enabled cma: Reserved 16 MiB at 0x3dc00000 Memory policy: Data cache writealloc PERCPU: Embedded 12 pages/cpu @ef7d2000 s18240 r8192 d22720 u49152 Built 1 zonelists in Zone order, mobility grouping on. Total pages: 260608 Kernel command line: bootargs=console=ttyPS0,115200 root=/dev/ram initrd=0x8000000 rw earlyprintk rootwait PID hash table entries: 4096 (order: 2, 16384 bytes) Dentry cache hash table entries: 131072 (order: 7, 524288 bytes) Inode-cache hash table entries: 65536 (order: 6, 262144 bytes) Memory: 1009532K/1048576K available (4456K kernel code, 213K rwdata, 1564K rodata, 240K init, 193K bss, 22660K reserved, 16384K cma-reserved, 238976K highmem) Virtual kernel memory layout: vector : 0xffff0000 - 0xffff1000 ( 4 kB) fixmap : 0xffc00000 - 0xfff00000 (3072 kB) vmalloc : 0xf0800000 - 0xff800000 ( 240 MB) lowmem : 0xc0000000 - 0xf0000000 ( 768 MB) pkmap : 0xbfe00000 - 0xc0000000 ( 2 MB) modules : 0xbf000000 - 0xbfe00000 ( 14 MB) .text : 0xc0008000 - 0xc05e949c (6022 kB) .init : 0xc05ea000 - 0xc0626000 ( 240 kB) .data : 0xc0626000 - 0xc065b450 ( 214 kB) .bss : 0xc065b450 - 0xc068bb54 ( 194 kB) Preemptible hierarchical RCU implementation. Build-time adjustment of leaf fanout to 32. RCU restricting CPUs from NR_CPUS=4 to nr_cpu_ids=2.
RCU: Adjusting geometry for rcu_fanout_leaf=32, nr_cpu_ids=2 NR_IRQS:16 nr_irqs:16 16 ps7-slcr mapped to f0802000 L2C: platform modifies aux control register: 0x72360000 -> 0x72760000 L2C: DT/platform modifies aux control register: 0x72360000 -> 0x72760000 L2C-310 erratum 769419 enabled L2C-310 enabling early BRESP for Cortex-A9 L2C-310 full line of zeros enabled for Cortex-A9 L2C-310 ID prefetch enabled, offset 1 lines L2C-310 dynamic clock gating enabled, standby mode enabled L2C-310 cache controller enabled, 8 ways, 512 kB L2C-310: CACHE_ID 0x410000c8, AUX_CTRL 0x76760001 zynq_clock_init: clkc starts at f0802100 Zynq clock init sched_clock: 64 bits at 333MHz, resolution 3ns, wraps every 4398046511103ns clocksource: arm_global_timer: mask: 0xffffffffffffffff max_cycles: 0x4ce07af025, max_idle_ns: 440795209040 ns clocksource: ttc_clocksource: mask: 0xffff max_cycles: 0xffff, max_idle_ns: 537538477 ns ps7-ttc #0 at f080a000, irq=18 Console: colour dummy device 80x30 console [tty0] enabled bootconsole [earlycon0] disabledMy feeling is that the trouble comes from a wrong setting of the console. In the boot log, one can notice that the "tty0" is enabled while in the boot arguments, I expect the console on ttyPS0. Could any one explain how the correct console could be set at startup? Additional info :the device tree serial config :ps7_uart_1: serial@e0001000 { clock-names = "ref_clk", "aper_clk"; clocks = <0x2 0x18 0x2 0x29>; compatible = "xlnx,xuartps"; current-speed = <115200>; device_type = "serial"; interrupt-parent = <&ps7_scugic_0>; interrupts = <0x0 0x32 0x4>; port-number = <0x0>; reg = <0xe0001000 0x1000>; xlnx,has-modem = <0x0>; };the boot arguments :console=ttyPS0,115200 root=/dev/ram initrd=0x8000000 rw earlyprintk rootwaitthe kernel serial config :# # Serial drivers # CONFIG_SERIAL_EARLYCON=y # CONFIG_SERIAL_8250 is not set # # Non-8250 serial port support # # CONFIG_SERIAL_AMBA_PL010 is not set # CONFIG_SERIAL_AMBA_PL011 is not set # CONFIG_SERIAL_EARLYCON_ARM_SEMIHOST is not set # CONFIG_SERIAL_MAX3100 is not set # CONFIG_SERIAL_MAX310X is not set CONFIG_SERIAL_UARTLITE=m CONFIG_SERIAL_CORE=y CONFIG_SERIAL_CORE_CONSOLE=y # CONFIG_SERIAL_JSM is not set # CONFIG_SERIAL_SCCNXP is not set # CONFIG_SERIAL_SC16IS7XX is not set # CONFIG_SERIAL_BCM63XX is not set # CONFIG_SERIAL_ALTERA_JTAGUART is not set # CONFIG_SERIAL_ALTERA_UART is not set # CONFIG_SERIAL_IFX6X60 is not set CONFIG_SERIAL_XILINX_PS_UART=y CONFIG_SERIAL_XILINX_PS_UART_CONSOLE=y # CONFIG_SERIAL_ARC is not set # CONFIG_SERIAL_RP2 is not set # CONFIG_SERIAL_FSL_LPUART is not set # CONFIG_SERIAL_CONEXANT_DIGICOLOR is not set # CONFIG_SERIAL_ST_ASC is not set # CONFIG_SERIAL_STM32 is not set # CONFIG_TTY_PRINTK is not set # CONFIG_HVC_DCC is not set # CONFIG_VIRTIO_CONSOLE is not set # CONFIG_IPMI_HANDLER is not set # CONFIG_HW_RANDOM is not set CONFIG_XILINX_DEVCFG=y # CONFIG_R3964 is not set # CONFIG_APPLICOM is not set # CONFIG_RAW_DRIVER is not set # CONFIG_TCG_TPM is not set CONFIG_DEVPORT=y # CONFIG_XILLYBUS is not setthe "inittab" entry: ttyPS0::respawn:/sbin/getty -L ttyPS0 115200 vt100 # GENERIC_SERIAL
Console setting in initramfs (ARM)
So one of the ways to fix this is to make use of a few additional environment variables. If we look in include/configs/ti_armv7_common.h we have: /* * We setup defaults based on constraints from the Linux kernel, which should * also be safe elsewhere. We have the default load at 32MB into DDR (for * the kernel), FDT above 128MB (the maximum location for the end of the * kernel), and the ramdisk 512KB above that (allowing for hopefully never * seen large trees). We say all of this must be within the first 256MB * as that will normally be within the kernel lowmem and thus visible via * bootm_size and we only run on platforms with 256MB or more of memory. */ #define DEFAULT_LINUX_BOOT_ENV \ "loadaddr=0x82000000\0" \ "kernel_addr_r=0x82000000\0" \ "fdtaddr=0x88000000\0" \ "fdt_addr_r=0x88000000\0" \ "rdaddr=0x88080000\0" \ "ramdisk_addr_r=0x88080000\0" \ "scriptaddr=0x80000000\0" \ "pxefile_addr_r=0x80100000\0" \ "bootm_size=0x10000000\0"So for the problem you're describing you would want to re-use bootm_size=0x10000000 to ensure that we keep the device tree within the first 256MB which is going to be kernel visible lowmem (with default kernel settings today at least, the size of kernel lowmem is configurable). Another equally useful solution here is to simply place the device tree and ramdisk into memory where you know they will be safe and use fdt_high=0xffffffff and initrd_high=0xffffffff to disable relocation. The main use of relocation is to make sure that things will be safe in the generic case (where U-Boot could be handed a random kernel and device tree and ramdisk and simply not know how big everything is). In a production case like this you can figure out some always safe and correct values, load there and not move them another time.
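At the U-Boot prompt that could look like this (standard U-Boot environment commands; the values match the header above, adjust to your memory map):

=> setenv bootm_size 0x10000000    # keep bootm relocations within the first 256MB
=> saveenv

or, for the no-relocation variant:

=> setenv fdt_high 0xffffffff      # never relocate the device tree
=> setenv initrd_high 0xffffffff   # never relocate the ramdisk
=> saveenv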
We run a customized version of U-Boot on an ARM-based embedded system and would like to load Linux 4.3 with a device tree blob. The system features 1GB of RAM of which the top 128MB are reserved for persistent storage. I use tftp to copy the kernel and the DTB into certain memory locations (kernel: 0x02000000, DTB: 0x02400000), and for now I’d like to ignore the initramfs. So I call bootm 0x2000000 - 0x2400000. What happens is that the DTB is relocated to the very end of the available U-Boot memory, to 0x37b60000 (virtual: 0xf7b60000). Linux fails to boot because it cannot access that address. It seems to be an issue about highmem/lowmem that I don’t understand, and lowmem ends at 760MB (virtual 0xef800000). Isn’t highmem supposed to be mapped dynamically when needed? (CONFIG_HIGHMEM is set.) What is the clean and proper way to solve this – cause U-Boot to use a lower location (how?) or change the Linux config to be able to access high memory (how?)? Note: using fdt_high=0xffffffff (and initrd_high=0xffffffff) Linux boots just fine as relocation is suppressed. U-Boot with debug information: DRAM: Monitor len: 00044358 Ram size: 40000000 Ram top: 40000000 Reserving 131072k for protected RAM at 38000000 TLB table from 37ff0000 to 37ff4000 Reserving 272k for U-Boot at: 37fab000 Reserving 4352k for malloc() at: 37b6b000 Reserving 80 Bytes for Board Info at: 37b6afb0 Reserving 160 Bytes for Global Data at: 37b6af10RAM Configuration: Bank #0: 00000000 1 GiBDRAM: 1 GiB New Stack Pointer is: 37b6aef0 Relocation Offset is: 33fab000 Relocating to 37fab000, new gd at 37b6af10, sp at 37b6aef0[…]* fdt: cmdline image address = 0x02400000 ## Checking for 'FDT'/'FDT Image' at 02400000 * fdt: raw FDT blob ## Flattened Device Tree blob at 02400000 Booting using the fdt blob at 0x2400000 of_flat_tree at 0x02400000 size 0x0000493c Loading Multi-File Image ... OK ## device tree at 02400000 ... 0240493b (len=31036 [0x793C]) Loading Device Tree to 37b60000, end 37b6793b ... OK
FDT relocated by U-Boot cannot be accessed by Linux (in highmem)
Okay, I found this page which guides me through the basic troubleshooting steps. This was what I was looking for. If you care about our problem itself: obviously, muxing the pins as MMC (as described in our user's guide) is not sufficient; the bus also needs to be declared as SDIO in the device tree. Now I can continue to find out how to enable SDIO detection for the beaglebone.
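Independent of the specific driver, a few generic checks help split the problem into "host controller", "bus enumeration" and "driver binding" (these are the standard sysfs paths, nothing beaglebone-specific):

ls /sys/class/mmc_host/    # did the MMC/SDIO host controller itself probe?
ls /sys/bus/sdio/devices/  # was any SDIO function enumerated on the bus?
dmesg | grep -i mmc        # messages from the mmc layer, broader than grepping for 'sdio'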
We are trying to get an SDIO-based 802.11 module to work on an SDIO port of the beaglebone. We adapted the device-tree overlay provided by the manufacturer to our hardware and compiled the driver; the driver can even be loaded successfully and I see it with lsmod, but no interface shows up. Now I have a missing link in my understanding: how would the driver even know that there is a wifi adapter on SDIO3? The interface used isn't configured anywhere. Shouldn't the system scan the SDIO bus for a device and load the driver matching the device found? But dmesg|grep -i sdio doesn't even give a match ... Before closing this question as »too broad«: the question is not about how to fix this problem (that would indeed be too broad), but about how to debug or systematically narrow down the cause. What are the steps to test to find out whether the problem is caused on the device-tree side, the kernel module, or some glue in between?
How to debug an SDIO configuration problem?
I ask myself why there isn't a way to put the device tree, as the hardware description, together with the bootloader on some ROM chip and build the Linux OS independently from any hardware specs, at least within some defined limits. Answer: Cheapness. Nobody wants to pay for the ROM chip. The SoC has a boot ROM in it, but the device tree varies depending on the circuit the SoC is in, so that's no good. You'd need a separate "BIOS chip" like x86 boards have to make this work. You can sort of make it work by treating the SD card that most ARM boards boot from as the BIOS chip; just put U-Boot and the device tree on it, and have U-Boot load the kernel from a USB drive. Then the USB drive would be (fairly) portable from ARM board to ARM board. In terms of optimization, while you can compile for ARM generically, it really pays off to target a specific processor (much more so than on x86_64).
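A rough sketch of what that split boot could look like at the U-Boot prompt (the variable names follow common U-Boot conventions; the file names are illustrative):

=> usb start                              # probe the USB bus
=> load mmc 0:1 ${fdt_addr_r} board.dtb   # the board-specific DTB stays on the SD card
=> load usb 0:1 ${kernel_addr_r} zImage   # the generic kernel comes from the USB drive
=> bootz ${kernel_addr_r} - ${fdt_addr_r}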
When I have some Linux distribution installed on an x64 system, I can pretty much unplug my storage drive, put it into another x64 machine, install a few HL drivers like the graphics driver, and it will most likely run without any hassle. When it comes to ARM systems, especially ARM SoCs like smartphones of any sort, there is a completely different picture. There is a different build of the same OS (for example an OEM Android distro) for every single smartphone. My question is: why is that? I understand that unlike PCs with their standardized architecture, there are lots and lots of SoC chips and architectures. But with the device tree in mind I ask myself why there isn't a way to put the device tree, as the hardware description, together with the bootloader on some ROM chip and build the Linux OS independently from any hardware specs, at least within some defined limits.
Why are ARM SoCs so seemingly hard to handle with the Kernel?
Create a udev rule to match it. It shouldn't be necessary to run the script "after a module is loaded" – it deals with a specific device, so it would be better to run it "after the device is detected". It doesn't matter how the device was detected; as long as the kernel reports it as a 'new' device, it'll work. That said, modules have a presence in /sys just like devices, which means they too can trigger udev rules, and even have systemd .device units generated for them. (Note that under /sys/module, dashes in module names become underscores, so pwm-sun4i shows up as pwm_sun4i.) For example, to trigger a service as soon as pwm-sun4i loads, use this udev rule: ACTION=="add", SUBSYSTEM=="module", KERNEL=="pwm_sun4i", \ TAG+="systemd", ENV{SYSTEMD_WANTS}+="fixup-pwm.service"To run a simple oneshot command: ACTION=="add", SUBSYSTEM=="module", KERNEL=="pwm_sun4i", RUN+="/etc/fix-pwm.sh"
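For completeness, a minimal oneshot unit the first rule could pull in; the unit and script names are just the placeholders used above:

cat > /etc/systemd/system/fixup-pwm.service <<'EOF'
[Unit]
Description=Drive the PWM pin low once pwm-sun4i is loaded

[Service]
Type=oneshot
ExecStart=/etc/fix-pwm.sh
EOF
systemctl daemon-reload   # let systemd pick up the new unit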
Why do I want this? I use the pwm-ir-tx kernel module to blast IR signals from my embedded device. However, when the pwm kernel module is loaded during the boot process, the pin is on high. It takes about 10 seconds or so until I can set it to low with a lirc irsend signal. You can in principle 'overload' the IR-LED to make it brighter, if it is only used in PWM mode and not permanently on. The 10 seconds during the boot process undermine this strategy, however. What is my system doing so far (e.g. what's working, DT-Overlay file, etc)? I am using Armbian and modified the pwm-ir-tx driver in the mainline kernel, so that the state after a send is guaranteed low (setting duty cycle to 0, it was randomly 1 or 0 when just disabling the pwm channel on my device). I am using a device tree overlay that activates the pwm and the pwm-ir-tx. /dts-v1/; /plugin/;/ { compatible = "allwinner,sun4i-a10"; fragment@0 { target = <&pwm>; __overlay__ { pinctrl-names = "default"; pinctrl-0 = <&pwm0_pin>, <&pwm1_pin>; status = "okay"; }; }; fragment@1 { target-path = "/"; __overlay__ { pwm-ir-transmitter { compatible = "pwm-ir-tx"; pwms = <&pwm 0 0 0>; }; }; };};When I boot, the pwm-sun4i modules and pwm-ir-tx are loaded and a /dev/lircx character device is available to be used. To turn the LED off, I enabled a systemd service 'lircd-out' with the Unit entry 'After=lircd.service', that turns the led off, but it runs about 10 seconds after the boot process. Setting modules in the DT Overlay to "disabled" and loading them with modprobe afterwards is not working (not creating pwm or rc devices in sysfs, or a /dev/lircx character device). Maybe since those modules are built in (i.e. configured with 'Y', not 'M' in the .config file), but I must admit my understanding is still a bit fuzzy, here. What would be ideal? The ability to control the loading of the modules pwm-sun4i and pwm-ir-tx and thus be able to run a script after pwm-sun4i was loaded that sets the pwm pin to low and then load pwm-ir-tx. But as I mentioned, when I load those modules manually, they are somehow not accessible for the sysfs. Alternatively, I could pass a parameter in the fragment@0 to the pwm that sets it to low. But I do not know how and do not see anything in the code of pwm-sun4i.c that would allow this. I do not want to modify the kernel source to keep it compatible for updates. Any suggestions?
Run script after module is loaded due to device tree
Turns out what was missing was to also select, in make nconfig: Kernel -> Build Device Tree with overlay support. Also, in the file <buildroot>/board/raspberrypi3-64/genimage-raspberrypi3-64.cfg, add the pps-gpio.dtbo file so that the image boot.vfat section looks like this: image boot.vfat { vfat { files = { "bcm2710-rpi-3-b.dtb", "bcm2710-rpi-3-b-plus.dtb", "bcm2837-rpi-3-b.dtb", "rpi-firmware/bootcode.bin", "rpi-firmware/cmdline.txt", "rpi-firmware/config.txt", "rpi-firmware/fixup.dat", "rpi-firmware/start.elf", "Image" } file overlays/pps-gpio.dtbo { image = "rpi-firmware/overlays/pps-gpio.dtbo" } } size = 32M } Putting the pps.conf file under /etc/modules-load.d is not necessary. With these changes I get a /dev/pps0 device automatically when booting the system.
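To confirm everything on the booted target (ppstest ships with the pps-tools package enabled above):

ls /dev/pps*        # the PPS character device should exist now
ppstest /dev/pps0   # with the signal wired up, this should print one assert per second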
I am using Buildroot to build a Linux image for the Raspberry Pi 3 in which I have access to pulse-per-second (PPS) inputs on one of the GPIO pins. First off, I have tried this with the standard Raspbian distribution and got it to work with the following changes:

Add dtoverlay=pps-gpio,gpiopin=20 to /boot/config.txt.
Add pps-gpio to /etc/modules.

I then get an entry /dev/pps0, and when connecting a wire with a PPS signal to physical pin 38 on the RPi3 and running pps-test /dev/pps0 I get the expected one signal per second. So far so good. Now I would like to recreate this with my own image built with Buildroot. I'm using the default configs/raspberrypi3_64_defconfig configuration, but with the following changes in make nconfig:

Using kernel branch rpi-4.14.y-rt from github.com/raspberrypi/linux
systemd as init system
/dev management using udev (from systemd)
Target packages -> Hardware handling -> pps-tools activated

(I think that is all the changes I made, but I might have forgotten something..) In the sdcard.img which I get as output I see the file pps-gpio.dtbo in the boot partition. I add the line dtoverlay=pps-gpio,gpiopin=20 to the file config.txt. I also add the line pps-gpio to a file named pps.conf which I put in /etc/modules-load.d on the file system. When I boot the system I get no entry /dev/ppsX, but when I run lsmod I get (among others):

pps_gpio 16384 0 pps_core 20480 1 pps_gpio

Does this mean the dtoverlay has been correctly loaded? What can I try in order to get an entry in /dev/ppsX?
PPS GPIO with Buildroot image on Raspberry Pi 3
After updating to kernel 4.20, the error no longer appears. I can't determine exactly which kernel commit fixed it. There are a few that might have played a role, but nothing I could identify as an exact fit.
I get the following warning on Debian Linux (kernel 4.18.8) bootup of my Microchip SAMA5D3 board:

mmc0: unrecognised SCR structure version 4 mmc0: error -22 whilst initialising SD card

After spewing this about 30 times I get the following, and the Linux boot completes:

mmc0: host does not support reading read-only switch, assuming write-enable mmc0: new SDHC card at address 0007 mmcblk0: mmc0:0007 SD4GB 3.71 GiB

I have tried several different SD cards and get the same result, with the only variation being the version number. I found the following online https://groups.google.com/forum/#!topic/beagleboard/A4zfNvyMmVI:

SCR is a register defined by MMC/SD standard, and the data should be read by the data bus, instead of the cmd bus as most predefined registers do. The omap_hsmmc_request function of the TI HSMMC driver tries to read in the SCR data by DMA, and this always returns garbage. Sometimes the SCR check gets passed, because the garbage data happens to be a valid SCR data

The Linux kernel just checks whether the SCR version is non-zero; if so, it produces the error. My dts file for mmc0 is as follows: mmc0: mmc@f0000000 { pinctrl-0 = <&pinctrl_mmc0_clk_cmd_dat0 &pinctrl_mmc0_dat1_3>; status = "okay"; slot@0 { reg = <0>; bus-width = <8>; }; };
mmc0 warning on SD card bootup of linux
Read section 2 in this: Specifying interrupt information for devices ...2) Interrupt controller nodes A device is marked as an interrupt controller with the "interrupt-controller" property. This is a empty, boolean property. An additional "#interrupt-cells" property defines the number of cells needed to specify a single interrupt. It is the responsibility of the interrupt controller's binding to define the length and format of the interrupt specifier. The following two variants are commonly used: a) one cellThe #interrupt-cells property is set to 1 and the single cell defines the index of the interrupt within the controller. Example: vic: intc@10140000 { compatible = "arm,versatile-vic"; interrupt-controller; #interrupt-cells = <1>; reg = <0x10140000 0x1000>; }; sic: intc@10003000 { compatible = "arm,versatile-sic"; interrupt-controller; #interrupt-cells = <1>; reg = <0x10003000 0x1000>; interrupt-parent = <&vic>; interrupts = <31>; /* Cascaded to vic */ };b) two cellsThe #interrupt-cells property is set to 2 and the first cell defines the index of the interrupt within the controller, while the second cell is used to specify any of the following flags:bits[3:0] trigger type and level flags 1 = low-to-high edge triggered 2 = high-to-low edge triggered 4 = active high level-sensitive 8 = active low level-sensitiveExample: i2c@7000c000 { gpioext: gpio-adnp@41 { compatible = "ad,gpio-adnp"; reg = <0x41>; interrupt-parent = <&gpio>; interrupts = <160 1>; gpio-controller; #gpio-cells = <1>; interrupt-controller; #interrupt-cells = <2>; nr-gpios = <64>; }; sx8634@2b { compatible = "smtc,sx8634"; reg = <0x2b>; interrupt-parent = <&gpioext>; interrupts = <3 0x8>; #address-cells = <1>; #size-cells = <0>; threshold = <0x40>; sensitivity = <7>; }; };So for the two cell variant, the first number is an index and the second is a bit mask defining the type of the interrupt input. This part of the device tree is handled by code in drivers/of/irq.c (e.g. of_irq_parse_one()). The two lines you refer to in the quoted example declare the device (gpio-exp@21) to be an interrupt controller and any other device that wants to use it must provide two cells per interrupt. Just above those lines is an example of a device specifying an interrupt in another interrupt controller (not this one, but the device with alias gpio), via the two properties interrupt-parent and interrupts (or you could use the new interrupts-extended which allows different interrupt controllers for each interrupt by specifying the parent as the first cell of the property).
I'm trying to setup a device tree source file for the first time on my custom platform. On the board is a NXP PCA9555 gpio expander. I'm attempting to setup node for the device and am a bit confused. Here is where I'm at with the node in the dts file: ioexp0: gpio-exp@21 { compatible = "nxp,pca9555"; reg = <21>; interrupt-parent = <&gpio>; interrupts = <8 0>; gpio-controller; #gpio-cells = <2>; /*I don't understand the following two lines*/ interrupt-controller; #interrupt-cells = <2>; };I got to this point by using the armada-388-gp.dts source as a guide. My confusion is on what code processes the #interrupt-cells property. The bindings documentation is not very helpful at all for this chip as it doesn't say anything regarding interrupt cell interpretation. Looking at the pca953x_irq_setup function in the source code for the pca9555 driver - I don't see anywhere that the #interrupt-cells property is handled. Is this handled in the linux interrupt handling code? I'm just confused as to how I'm suppose to know the meaning of the two interrupt cells. pca953x_irq_setup for your convenience: static int pca953x_irq_setup(struct pca953x_chip *chip, int irq_base) { struct i2c_client *client = chip->client; int ret, i; if (client->irq && irq_base != -1 && (chip->driver_data & PCA_INT)) { ret = pca953x_read_regs(chip, chip->regs->input, chip->irq_stat); if (ret) return ret; /* * There is no way to know which GPIO line generated the * interrupt. We have to rely on the previous read for * this purpose. */ for (i = 0; i < NBANK(chip); i++) chip->irq_stat[i] &= chip->reg_direction[i]; mutex_init(&chip->irq_lock); ret = devm_request_threaded_irq(&client->dev, client->irq, NULL, pca953x_irq_handler, IRQF_TRIGGER_LOW | IRQF_ONESHOT | IRQF_SHARED, dev_name(&client->dev), chip); if (ret) { dev_err(&client->dev, "failed to request irq %d\n", client->irq); return ret; } ret = gpiochip_irqchip_add_nested(&chip->gpio_chip, &pca953x_irq_chip, irq_base, handle_simple_irq, IRQ_TYPE_NONE); if (ret) { dev_err(&client->dev, "could not connect irqchip to gpiochip\n"); return ret; } gpiochip_set_nested_irqchip(&chip->gpio_chip, &pca953x_irq_chip, client->irq); } return 0; }This is my first time working with device tree so I'm hoping it's something obvious that I'm just missing. UPDATE: As a clarification - I am working with kernel version 4.12-rc4 at the moment. I now understand that I was misinterpreting some properties of the device tree. I was previously under the impression that the driver had to specify how all properties were handled. I now see that linux will actually handle many of the generic properties such as gpios or interrupts (which makes a lot of sense). Here is a bit more of a detailed explanation of how the translation from intspec to IRQ_TYPE* happens: The function of_irq_parse_one copies the interrupt specifier integers to a struct of_phandle_args here. This arg is then passed to irq_create_of_mapping via a consumer function (e.g. of_irq_get). This function then maps these args to a struct irq_fwspec via of_phandle_args_to_fwspec and passes it's fwspec data to irq_create_fwspec_mapping. These functions are all found in irqdomain.c. At this point the irq will belong to an irq_domain or use the irq_default_domain. As far I can tell - the pca853x driver uses the default domain. This domain is often setup by platform specific code. I found mine by searching for irq_domain_ops on cross reference. 
A lot of these seem to do simple copying of intspec[1] & IRQ_TYPE_SENSE_MASK to the type variable in irq_create_fwspec_mapping via irq_domain_translate. From here the type is set to the irq's irq_data via irqd_set_trigger_type.
Confusion regarding #interrupt-cells configuration on PCA9555 expander
User error. I found that SAM-BA also has a verification method: # sam-ba -p serial:ttyACM0:115200 -b sama5d2-xplained -a serialflash -c verify:at91-sama5d2_xplained_custom.dtb:0x70000 Opening serial port 'ttyACM0' Connection opened. Detected memory size is 4194304 bytes. Executing command 'verify:at91-sama5d2_xplained_custom.dtb:0x70000' Added 180 bytes of padding to align to page size Error: Command 'verify:at91-sama5d2_xplained_custom.dtb:0x70000': Failed verification. First error at file offset 0x00000000 Connection closed.Not good. It turns out that step 5 was missing a very essential step: first you need to erase the flash memory before writing it (I had no idea; I apparently have always used tools that took care of this for me): # sam-ba -p serial:ttyACM0:115200 -b sama5d2-xplained -a serialflash -c erase:0x70000:0x8000 Opening serial port 'ttyACM0' Connection opened. Detected memory size is 4194304 bytes. Executing command 'erase:0x70000:0x8000' Erased 32768 bytes at address 0x00070000 (100.00%) Connection closed.# sam-ba -p serial:ttyACM0:115200 -b sama5d2-xplained -a serialflash -c write:at91-sama5d2_xplained_custom.dtb:0x70000 Opening serial port 'ttyACM0' Connection opened. Detected memory size is 4194304 bytes. Executing command 'write:at91-sama5d2_xplained_custom.dtb:0x70000' Added 180 bytes of padding to align to page size Wrote 30976 bytes at address 0x00070000 (100.00%) Connection closed.# sam-ba -p serial:ttyACM0:115200 -b sama5d2-xplained -a serialflash -c verify:at91-sama5d2_xplained_custom.dtb:0x70000 Opening serial port 'ttyACM0' Connection opened. Detected memory size is 4194304 bytes. Executing command 'verify:at91-sama5d2_xplained_custom.dtb:0x70000' Added 180 bytes of padding to align to page size Verified 30976 bytes at address 0x00070000 (100.00%) Connection closed.
I have a problem: my device (an Atmel SAMA5D27 Xplained board) won't boot after my attempt to flash a new device tree. Here's what I did (details are at the end of this message):

I downloaded the Linux4SAM source tree from Github (tag linux4sam_5.3). I used this tag since that is the one that was installed when I got the device. I made changes to the file arch/arm/boot/dts/at91-sama5d2_xplained.dts to enable the SPI1 device (and disable the SDMMC1 device since it is conflicting with the SPI1 pins). I saved the file in the same directory as at91-sama5d2_xplained_custom.dts and modified the Makefile accordingly. I compiled (from the root directory of the source tree) with the following command: $ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- dtbs. A new DTB was generated. I verified the size of the DTB file with the file command, which told me the DTB is 30796 bytes. I flashed the DTB to the device with the SAM-BA utility, which was successful, but reported that 30976 bytes were flashed. I rebooted the device, which got stuck in the U-boot environment. This was to be expected, since the DTB got changed. I updated the bootcmd to reflect the new size of the DTB (30796 = 0x784c), but it still won't boot.

Here's the result from printenv after my update (which I saved with saveenv): => printenv bootargs=console=ttyS0,115200 root=/dev/mmcblk0p1 rw rootfstype=ext4 rootwait bootcmd=sf probe 0; sf read 0x21000000 0x70000 0x784c; sf read 0x22000000 0x7c000 0x3636a8; bootz 0x22000000 - 0x21000000 bootdelay=1 ethact=gmac0 ethaddr=fc:c2:3d:02:f4:e9 stderr=serial stdin=serial stdout=serialEnvironment size: 309/8188 bytes Note that I only updated the size in the first sf read command of bootcmd. It used to read sf read 0x21000000 0x70000 0x77c8, which corresponds to the size of the DTB that I get from the original at91-sama5d2_xplained.dts file. Here's the error that I got: SF: Detected AT25DF321 with page size 256 Bytes, erase size 4 KiB, total 4 MiB device 0 offset 0x70000, size 0x784c SF: 30796 bytes @ 0x70000 Read: OK device 0 offset 0x7c000, size 0x3636a8 SF: 3552936 bytes @ 0x7c000 Read: OK Kernel image @ 0x22000000 [ 0x000000 - 0x3636a8 ] ERROR: Did not find a cmdline Flattened Device Tree Could not find a valid device tree

Naturally, my question is: what did I do wrong? I have some hypotheses, which I tried: I got the size wrong: I tried to use 0x7900 in the bootcmd (corresponding to the 30976 bytes that SAM-BA reported), but this didn't help. I flashed to the wrong address: I'm not entirely sure what the address 0x21000000 is in sf read in bootcmd, but from the example files that were provided with SAM-BA I inferred that 0x70000 was correct. Changing 0x21000000 to 0x0 doesn't help. Changing the SAM-BA write command to write at 0x21070000 results in an error that it cannot write past the end of memory. My DTB is wrong. I don't think I'm doing particularly strange things in my DTS file, and since it compiles I assume it is at least in a format that should be readable. Any help/advice/pointers/etc. is very much appreciated, as I now have an unbootable device...
I'm also worried that if I flashed to the wrong address, I broke all kinds of stuff that I'm not aware of.

Details

DTS file: /dts-v1/; #include "at91-sama5d2_xplained_common.dtsi"/ { model = "Atmel SAMA5D2 Xplained"; compatible = "atmel,sama5d2-xplained", "atmel,sama5d2", "atmel,sama5"; ahb { sdmmc1: sdio-host@b0000000 { status = "disabled"; /* conflict with spi1 */ }; apb { can0: can@f8054000 { status = "okay"; }; can1: can@fc050000 { status = "okay"; }; spi1: spi@fc000000 { pinctrl-names = "default"; pinctrl-0 = <&pinctrl_spi1_default>; status = "okay"; }; pinctrl@fc038000 { pinctrl_spi1_default: spi1_default { pinmux = <PIN_PA22__SPI1_SPCK>, <PIN_PA23__SPI1_MOSI>, <PIN_PA24__SPI1_MISO>, <PIN_PA25__SPI1_NPCS0>; bias-disable; }; }; }; }; };

Compilation of the DTB: $ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- dtbs CHK include/config/kernel.release CHK include/generated/uapi/linux/version.h CHK include/generated/utsrelease.h make[1]: 'include/generated/mach-types.h' is up to date. CHK include/generated/bounds.h CHK include/generated/asm-offsets.h CALL scripts/checksyscalls.sh DTC arch/arm/boot/dts/at91-sama5d2_xplained_custom.dtb

Verification of the DTB file size: $ file arch/arm/boot/dts/at91-sama5d2_xplained_custom.dtb arch/arm/boot/dts/at91-sama5d2_xplained_custom.dtb: Device Tree Blob version 17, size=30796, boot CPU=0, string block size=1692, DT structure block size=29048

SAM-BA flash command output: # sam-ba -p serial:ttyACM0:115200 -b sama5d2-xplained -a serialflash -c write:at91-sama5d2_xplained_custom.dtb:0x70000 Opening serial port 'ttyACM0' Connection opened. Detected memory size is 4194304 bytes. Executing command 'write:at91-sama5d2_xplained_custom.dtb:0x70000' Added 180 bytes of padding to align to page size Wrote 30976 bytes at address 0x00070000 (100.00%) Connection closed.

SAM-BA trying to write past the end of memory: # sam-ba -p serial:ttyACM0:115200 -b sama5d2-xplained -a serialflash -c write:at91-sama5d2_xplained_custom.dtb:0x21070000 Opening serial port 'ttyACM0' Connection opened. Detected memory size is 4194304 bytes. Executing command 'write:at91-sama5d2_xplained_custom.dtb:0x21070000' Added 180 bytes of padding to align to page size Error: Command 'write:at91-sama5d2_xplained_custom.dtb:0x21070000': Cannot write past end of memory, only -549912576 bytes remaining at offset 0x21070000 (requested 30976 bytes) Connection closed.
U-boot could not find a valid device tree
Thanks for the reply. This issue has been resolved. On the development board a 50MHz oscillator provides the clock to the HPS peripherals, while a 25MHz signal, generated from that same 50MHz clock, is connected to the HPS clock pin on the fabric. On the new board we did not use the 50MHz oscillator but instead opted for a 25MHz oscillator, with a single track connecting directly from the oscillator to the HPS clock pin on the fabric. We did not have any other clock to supply to the HPS peripherals, so we routed the dedicated HPS clock internally toward the input pin on the HPS peripherals. This was a mistake: it cannot be done in the top-level architecture of the Cyclone V, because the dedicated HPS clock pin is not a GPIO.
I have been working on an Altera DE1-SoC Development Board for 8 months. The system I was working on includes a Cyclone V FPGA chip, particularly the 5CSEMA5F31C6N. It was running an embedded Linux operating system on chip. All was well and development was ongoing. Two weeks ago, a new custom board was put together by a hardware engineer in the company. The design and components were mostly similar to those of the Development Board. All HPS-related pins are wired the same way, with one main difference being that the default console port was UART1 instead. That issue has since been resolved and I am now able to receive the U-boot and kernel messages through UART1. But the system did not completely boot. I have pinpointed multiple possible causes. Firstly, I had an init.d script that would export the GPIO LEDs and create a sysfs file. Exporting the gpio pin works; however, changing the direction, changing the value, or reading from it causes the system to freeze. I disabled that function in the init.d script and rebooted the device. This time the boot failed on another init.d script line. This line was going to change the value of a register in the lightweight bridge. The command was devmem 0xFF200XXX 32 1, with XXX being the specific register. I tried using devmem on all bridges but all attempts would freeze Linux. I tried using devmem on the UART register of the HPS, and on the SDCard register of the HPS (referenced here), and it does not freeze. I can verify that the bridge is enabled by reading the state sysfs file of each bridge: fpga_bridge state returns enabled I can also verify that the bridges are linked to the driver from this dmesg output: dmesg output I have enabled all three bridges in the HPS configuration using Quartus Platform Designer. I also have the following lines in my u-boot.scr: fatload mmc 0:1 $fpgadata soc_system.rbf; fpga load 0 $fpgadata $filesize; setenv fdtimage soc_system.dtb; run bridge_enable_handoff; run mmcload; run mmcboot; I have also attempted enabling the bridges through the U-boot command line following these instructions. However, I am unable to write anything into $l3regs: writing into l3regs I am building the OS using Buildroot 2016.05 with a 4.4 Linux kernel. To create the .rbf, .dts, .dtb, preloader-mkpimage.bin, and u-boot files, I am using SoC EDS 18.1 [Build 625]. I have run out of things to try. I would consider the issue solved if I am able to toggle an LED on and off from the Linux OS, using sysfs files. Assuming that the hardware is correct, what else could be the cause and how do I fix it?
Why does Linux freeze when trying to access peripherals connected to the lightweight HPS-to-FPGA bridge (or any bridge)?
My problem was that I used the mainline kernel 6.1 (LTS), which does not support CONFIG_OF_CONFIGFS. So I downloaded a dtbo-configfs device driver from here: https://github.com/ikwzm/dtbocfg, compiled it, and loaded it into the kernel. After mounting the configfs, the device-tree directory was available.
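For reference, the sequence looks roughly like this; the dtbo and status attribute names follow the dtbocfg project's documentation as I recall it, so double-check against its README:

make && sudo insmod dtbocfg.ko   # build and load the out-of-tree driver
sudo mount -t configfs none /sys/kernel/config
sudo mkdir /sys/kernel/config/device-tree/overlays/dummy
sudo cp my-overlay.dtbo /sys/kernel/config/device-tree/overlays/dummy/dtbo
echo 1 | sudo tee /sys/kernel/config/device-tree/overlays/dummy/status   # apply the overlay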
On my embedded system I enabled CONFIG_CONFIGFS_FS=y to have access to configfs. When booted, I mounted it with the help of mount -t configfs none /sys/kernel/config. That works like a charm: # mount | grep configfs configfs on /sys/kernel/config type configfs (rw,relatime) Now I try to create a folder device-tree, as I wanted to try out the dynamic loading of dtbo files from userspace. Unfortunately I get an error: # mkdir -p /sys/kernel/config/device-tree/overlays/dummy mkdir: can't create directory '/sys/kernel/config/device-tree/': Operation not permitted I already made sure that CONFIG_OF_DYNAMIC and CONFIG_OF_OVERLAY are set. The permissions of /sys/kernel/config are: # ls -la /sys/kernel/config/ total 0 drwxr-xr-x 2 root root 0 May 31 16:57 . drwxr-xr-x 8 root root 0 May 31 15:56 .. So I'd have guessed that writing to this directory as root should not be a problem at all. Any hints on how I could investigate this issue?
mkdir in configfs not permitted
The solution was to target the correct I2C bus, the one the accelerometer is actually connected to. This turned out to be i2c2, not i2c0, which solved my issue. (Error -121 is -EREMOTEIO: the device never acknowledged on the bus the overlay pointed at.) The correct dtbo file can be seen below: /* * MIRA custom cape device tree overlay * Supports MMA8451Q Accelerometer */ /dts-v1/; /plugin/;#include <dt-bindings/interrupt-controller/irq.h>/ { /* * Helper to show loaded overlays under: /proc/device-tree/chosen/overlays/ */ fragment@0 { target-path="/"; __overlay__ { chosen { overlays { MIRA_EXTENSIONS = __TIMESTAMP__; }; }; }; }; fragment@1 { target = <&i2c2>; __overlay__ { status = "okay"; #address-cells = <1>; #size-cells = <0>; accel@1C { compatible = "fsl,mma8451"; reg = <0x1C>; interrupt-parent = <&gpio1>; interrupts = <16 IRQ_TYPE_EDGE_RISING>; interrupt-names = "INT1"; }; }; }; };
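If you are unsure which bus a device sits on, i2c-tools (already used above) can simply scan each one; the accelerometer should answer at 0x1c on exactly one bus:

i2cdetect -y -r 0   # scan bus 0
i2cdetect -y -r 2   # scan bus 2; 0x1c showing up here is what pointed at &i2c2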
I am currently attempting to implement the mma8452 driver for my MMA8451Q accelerometer by adding it to the Linux device tree. Currently I am taking the route of creating a device tree overlay file (dtbo) that contains the addition to the device tree describing the accelerometer. It loads properly at boot and correctly pulls in the designated mma8452 driver. The driver however returns the following error in my dmesg log on boot:

[ 23.2352] mma8452: probe of 0-001c failed with error -121

Does anyone know what this means or how to fix it? Perhaps an overlay is not the correct way to do this and instead I should create a dtsi file or modify the source dts file? I can access the accelerometer from the console using the i2c-tools package at the SA0 address 0x1C. The driver supplied by NXP can be found here: mma8452.c Driver. My dtbo file can be seen below: /* * MIRA custom cape device tree overlay * Supports MMA8451Q Accelerometer */ /dts-v1/; /plugin/;#include <dt-bindings/interrupt-controller/irq.h>/ { /* * Helper to show loaded overlays under: /proc/device-tree/chosen/overlays/ */ fragment@0 { target-path="/"; __overlay__ { chosen { overlays { MIRA_EXTENSIONS = __TIMESTAMP__; }; }; }; }; fragment@1 { target = <&i2c0>; __overlay__ { status = "okay"; #address-cells = <1>; #size-cells = <0>; accel@1C { compatible = "fsl,mma8451"; reg = <0x1C>; interrupt-parent = <&gpio1>; interrupts = <16 IRQ_TYPE_EDGE_RISING>; interrupt-names = "INT1"; }; }; }; };The target = <&i2c0> was chosen simply because it was an i2c node in another dtsi file. No other reason. It may be wrong.
How to implement i2c device in Device Tree?
The problem was in this section &ehrpwm1 { pinctrl-name = "default"; pinctrl-0 = <&backlight_pin>; status = "okay"; };pinctrl-name should have been pinctrl-names (note the "s" on the end)
I have tried to get PWM working and am not having any success. I am using the TI Processor SDK with a modified version of the am335x-boneblack.dts device tree (see below) The PWM driver (ehrpwm1) probes correctly and appears in /sys/class/pwm/pwmchip0. Then, I configured the chip cd /sys/class/pwm/pwmchip0 echo 0 > export echo 1000000 > pwm0/period echo 250000 > pwm0/duty_cycle echo 1 > pwm0/enableHowever, there is no PWM output. Am I missing something obvious? Here is the device tree I have made: /dts-v1/;#include "am33xx.dtsi" #include "am335x-bone-common.dtsi"/ { model = "TI AM335x BeagleBone Black"; compatible = "ti,am335x-bone-black", "ti,am335x-bone", "ti,am33xx"; };&ldo3_reg { regulator-min-microvolt = <1800000>; regulator-max-microvolt = <1800000>; regulator-always-on; };&mmc1 { vmmc-supply = <&vmmcsd_fixed>; };&mmc2 { vmmc-supply = <&vmmcsd_fixed>; pinctrl-names = "default"; pinctrl-0 = <&emmc_pins>; bus-width = <8>; status = "okay"; };&am33xx_pinmux { lcd_pins: lcd_pins { pinctrl-single,pins = < AM33XX_IOPAD(0x8a0, PIN_OUTPUT | MUX_MODE0) /* P9.45, lcd_data0 */ AM33XX_IOPAD(0x8a4, PIN_OUTPUT | MUX_MODE0) /* P9.46, lcd_data1 */ AM33XX_IOPAD(0x8a8, PIN_OUTPUT | MUX_MODE0) /* P9.43, lcd_data2 */ AM33XX_IOPAD(0x8ac, PIN_OUTPUT | MUX_MODE0) /* P9.44, lcd_data3 */ AM33XX_IOPAD(0x8b0, PIN_OUTPUT | MUX_MODE0) /* P9.41, lcd_data4 */ AM33XX_IOPAD(0x8b4, PIN_OUTPUT | MUX_MODE0) /* P9.42, lcd_data5 */ AM33XX_IOPAD(0x8b8, PIN_OUTPUT | MUX_MODE0) /* P9.39, lcd_data6 */ AM33XX_IOPAD(0x8bc, PIN_OUTPUT | MUX_MODE0) /* P9.40, lcd_data7 */ AM33XX_IOPAD(0x8c0, PIN_OUTPUT | MUX_MODE0) /* P9.37, lcd_data8 */ AM33XX_IOPAD(0x8c4, PIN_OUTPUT | MUX_MODE0) /* P9.38, lcd_data9 */ AM33XX_IOPAD(0x8c8, PIN_OUTPUT | MUX_MODE0) /* P9.36, lcd_data10 */ AM33XX_IOPAD(0x8cc, PIN_OUTPUT | MUX_MODE0) /* P9.34, lcd_data11 */ AM33XX_IOPAD(0x8d0, PIN_OUTPUT | MUX_MODE0) /* P9.35, lcd_data12 */ AM33XX_IOPAD(0x8d4, PIN_OUTPUT | MUX_MODE0) /* P9.33, lcd_data13 */ AM33XX_IOPAD(0x8d8, PIN_OUTPUT | MUX_MODE0) /* P9.31, lcd_data14 */ AM33XX_IOPAD(0x8dc, PIN_OUTPUT | MUX_MODE0) /* P9.32, lcd_data15 */ AM33XX_IOPAD(0x820, PIN_OUTPUT | MUX_MODE1) /* P9.19, lcd_data23 */ AM33XX_IOPAD(0x824, PIN_OUTPUT | MUX_MODE1) /* P9.13, lcd_data22 */ AM33XX_IOPAD(0x828, PIN_OUTPUT | MUX_MODE1) /* P9.14, lcd_data21 */ AM33XX_IOPAD(0x82c, PIN_OUTPUT | MUX_MODE1) /* P9.17, lcd_data20 */ AM33XX_IOPAD(0x830, PIN_OUTPUT | MUX_MODE1) /* P9.12, lcd_data19 */ AM33XX_IOPAD(0x834, PIN_OUTPUT | MUX_MODE1) /* P9.11, lcd_data18 */ AM33XX_IOPAD(0x838, PIN_OUTPUT | MUX_MODE1) /* P9.16, lcd_data17 */ AM33XX_IOPAD(0x83c, PIN_OUTPUT | MUX_MODE1) /* P9.15, lcd_data16 */ AM33XX_IOPAD(0x8e0, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* P9.27, lcd_vsync */ AM33XX_IOPAD(0x8e4, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* P9.29, lcd_hsync */ AM33XX_IOPAD(0x8e8, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* P9.28, lcd_pclk */ AM33XX_IOPAD(0x8ec, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* P9.30, lcd_ac_bias_en */ /* LCD enable */ AM33XX_IOPAD(0x88c, PIN_OUTPUT_PULLUP | MUX_MODE7) /* P8.19, gpio2[1] */ >; }; backlight_pin: backlight_pin { pinctrl-single,pins = < AM33XX_IOPAD(0x848, PIN_OUTPUT | MUX_MODE6) >; /* P9.14, gpio1[18] */ }; touchscreen_pins: touchscreen_pins { pinctrl-single,pins = < AM33XX_IOPAD(0x9a4, PIN_INPUT_PULLUP | MUX_MODE7) >; /* P9.27, gpio3[19] */ }; dcan0_pins: dcan0_pins { pinctrl-single,pins = < AM33XX_IOPAD(0x97c, PIN_INPUT_PULLUP | MUX_MODE2) /* P9.19, ddcan0_rx */ AM33XX_IOPAD(0x978, PIN_OUTPUT_PULLUP | MUX_MODE2) /* P9.20, ddcan0_tx */ >; }; uart1_pins: uart1_pins { 
pinctrl-single,pins = < AM33XX_IOPAD(0x980, PIN_INPUT_PULLUP | MUX_MODE0) /* P9.26, uart1_rxd */ AM33XX_IOPAD(0x984, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* P9.24, uart1_txd */ >; }; uart2_pins: uart2_pins { pinctrl-single,pins = < AM33XX_IOPAD(0x950, PIN_INPUT_PULLUP | MUX_MODE1) /* P9.22, uart2_rxd */ AM33XX_IOPAD(0x954, PIN_OUTPUT_PULLDOWN | MUX_MODE1) /* P9.21, uart2_txd */ >; }; uart4_pins: uart4_pins { pinctrl-single,pins = < AM33XX_IOPAD(0x870, PIN_INPUT_PULLUP | MUX_MODE6) /* P9.11, uart4_rxd */ AM33XX_IOPAD(0x874, PIN_OUTPUT_PULLDOWN | MUX_MODE6) /* P9.13, uart4_txd */ >; }; uart5_pins: uart5_pins { pinctrl-single,pins = < AM33XX_IOPAD(0x8c4, PIN_INPUT_PULLUP | MUX_MODE4) /* P8.38, uart5_rxd */ AM33XX_IOPAD(0x8c0, PIN_OUTPUT_PULLDOWN | MUX_MODE4) /* P8.37, uart5_txd */ >; }; ehrpwm1_pins: ehrpwm1_pins { pinctrl-single,pins = < AM33XX_IOPAD(0x848, PIN_OUTPUT | MUX_MODE6) /* P9.14, EHRPWM1A */ AM33XX_IOPAD(0x84c, PIN_OUTPUT | MUX_MODE6) /* P9.16, EHRPWM1B */ >; }; };&epwmss1 { status = "okay"; };&ehrpwm1 { pinctrl-name = "default"; pinctrl-0 = <&backlight_pin>; status = "okay"; };&lcdc { status = "okay"; blue-and-red-wiring = "crossed"; };&sgx { status = "okay"; };/ { lcd0: display { status = "okay"; compatible = "ti,tilcdc,panel"; label = "lcd"; pinctrl-names = "default"; pinctrl-0 = <&lcd_pins>; enable-gpios = <&gpio2 1 GPIO_ACTIVE_HIGH>; /* P8.19 */ panel-info { ac-bias = <255>; ac-bias-intrpt = <0>; dma-burst-sz = <16>; bpp = <32>; fdd = <0x80>; sync-edge = <0>; sync-ctrl = <0>; raster-order = <0>; fifo-th = <0>; }; display-timings { native-mode = <&timing0>; timing0: 800x480 { clock-frequency = <45000000>; hactive = <800>; vactive = <480>; hfront-porch = <40>; hback-porch = <40>; hsync-len = <48>; vback-porch = <29>; vfront-porch = <13>; vsync-len = <3>; hsync-active = <0>; vsync-active = <0>; }; }; }; };&i2c2 { polytouch: edt-ft5x06@38 { compatible = "edt,edt-ft5406", "edt,edt-ft5x06"; reg = <0x38>; pinctrl-names = "default"; pinctrl-0 = <&touchscreen_pins>; interrupt-parent = <&gpio3>; interrupts = <19 0>; /* P9.27 */ touchscreen-size-x = <799>; touchscreen-size-y = <479>; xfuzz = <0>; yfuzz = <0>; }; };&rtc { system-power-controller; };&dcan0 { status = "okay"; pinctrl-name = "default"; pinctrl-0 = <&dcan0_pins>; };&uart1 { status = "okay"; pinctrl-names = "default"; pinctrl-0 = <&uart1_pins>; };&uart2 { status = "okay"; pinctrl-names = "default"; pinctrl-0 = <&uart2_pins>; };&uart4 { status = "okay"; pinctrl-names = "default"; pinctrl-0 = <&uart4_pins>; };
How do I configure the Beaglebone Black PWM correctly
Setting up a node in the device tree (dts) requires a compatible node like gpio-keys or gpio-leds. You can't just make up a node like I was trying to do. Since the line I need is part of the SPI BLE interface, I added it to my spi1 node as follows:

spi1: spi@f8008000 {
    cs-gpios = <0>, <0>, <0>, <0>;
    pinctrl-0 = <&pinctrl_spi1 &pinctrl_ble_irq>;
    dmas = <0>, <0>;
    status = "okay";

    spidev@0 {
        compatible = "semtech,sx1301";
        spi-max-frequency = <10000000>;
        reg = <0>;
    };
};

pinctrl@fffff200 {
    board {
        pinctrl_ble_irq: ble_irq {
            atmel,pins =
                <AT91_PIOB 14 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
                <AT91_PIOB 20 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
                <AT91_PIOB 22 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
                <AT91_PIOB 26 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
                <AT91_PIOC 17 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
                <AT91_PIOD 6 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
                <AT91_PIOD 15 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
                <AT91_PIOE 16 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
                <AT91_PIOE 23 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
                <AT91_PIOE 31 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>,
                <AT91_PIOD 8 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>;
        };
    };
};

I still don't know why the other pins won't pull down, but at least now I am not getting an error during boot. I had to turn on earlyprintk in the kernel in order to see the message.

Update: I was finally able to get pulldown to work. Several pins were pulled up in hardware, which is why the pulldown had no effect. Several pins were configured as LEDs or used by other peripherals, which I disabled. All the pins in the above example did pull down successfully.
I would like to have the default for certain input pins be a weak pulldown. I am using a sama5d36 running Debian 4.12.8. I modified the dts file as follows: ahb { abp { pinctrl@fffff200 { board { pinctrl_inputs: input_pins { atmel,pins = <AT91_PIOC 26 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>, <AT91_PIOC 27 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>, <AT91_PIOA 30 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>, <AT91_PIOA 31 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>; }; }; }; }; };myInputs { compatible = "atmel,at91sam9x5-pinctrl", "atmel,at91rm9200-pinctrl"; pinctrl-names = "default"; pinctrl-0 = <&pinctrl_inputs>; };Just wanted to add that I do see PULL_DOWN in /sys/kernel/debug/pinctrl/ahb:apb:pinctrl@fffff200/pinconf-pins: pin 30 (pioA30): PULL_DOWN|DRIVE_STRENGTH_MED pin 31 (pioA31): PULL_DOWN|DRIVE_STRENGTH_MED pin 90 (pioC26): PULL_DOWN|DRIVE_STRENGTH_MED pin 91 (pioC27): PULL_DOWN|DRIVE_STRENGTH_MEDbut /sys/class/gpio/pioA30 still shows a value of 1: direction -> in active_low -> 0 value -> 1Same for the other pins (PioA31, pioC26, pioC27). I don’t need this pin to be active low I just added that to show that the input is high with nothing connected, something I verified with a scope. Update: I added the following pins and they actually work: <AT91_PIOD 6 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>, <AT91_PIOD 7 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_DOWN>;which confuses me even more. I checked /sys/kernel/debug/pinctrl/ahb:apb:pinctrl@fffff200/pinmux-pins and all the pins show as follows: pin 102 (pioD6): (MUX UNCLAIMED) (GPIO UNCLAIMED) pin 103 (pioD7): (MUX UNCLAIMED) (GPIO UNCLAIMED)Anyone experienced anything similar?
Want pulldown on gpio pin
For a permanent solution to change the label of the container, use:

sudo cryptsetup config /dev/sdb1 --label YOURLABEL

Edit: Note that labels only work with LUKS2 headers. In any case, it is possible to convert a LUKS1 header into LUKS2 with:

sudo cryptsetup convert /dev/sdb1 --type luks2

Note: the LUKS2 header occupies more space, which can reduce the total number of key slots. Converting LUKS2 back to LUKS1 is also possible, but there are reports of people who have had problems or difficulties converting back.
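A quick way to see what you're starting from and to verify the result (a sketch; /dev/sdb1 is just the example device from above):

# Labels need a LUKS2 header -- check the version before converting
sudo cryptsetup luksDump /dev/sdb1 | grep -m1 Version

# After 'cryptsetup config --label', the label is visible even while locked
sudo blkid /dev/sdb1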
I just received a new USB flash drive, and set up 2 encrypted partitions on it. I used dm-crypt (LUKS mode) through cryptsetup. With an additional non-encrypted partition, the drive has the following structure:/dev/sdb1, encrypted, hiding an ext4 filesystem labelled "Partition 1". /dev/sdb2, encrypted, hiding another ext4 filesystem, labelled "Partition 2". /dev/sdb3, clear, visible ext4 filesystem labelled "Partition 3".Because the labels are attached to the ext4 filesystems, the first two remain completely invisible as long as the partitions haven't been decrypted. This means that, in the meantime, the LUKS containers have no labels. This is particularly annoying when using GNOME (automount), in which case the partitions appear as "x GB Encrypted" and "y GB Encrypted" until I decide to unlock them. This isn't really a blocking problem, but it's quite annoying, since I really like my labels and would love to see them appear even when my partitions are still encrypted. Therefore, is there a way to attach labels to dm-crypt+LUKS containers, just like we attach labels to ext4 filesystems? Does the dm-crypt+LUKS header have some room for that, and if so, how may I set a label? Note that I don't want to expose my ext4 labels before decryption, that would be silly. I'd like to add other labels to the containers, which could appear while the ext4 labels are hidden.
How can I set a label on a dm-crypt+LUKS container?
The answer (as I now know): concurrency. In short: My sequential write, either using dd or when copying a file (like... in daily use), becomes a pseudo-random write (bad) because four threads are working concurrently on writing the encrypted data to the block device after concurrent encryption (good). Mitigation (for "older" kernels) The negative effect can be mitigated by increasing the amount of queued requests in the IO scheduler queue like this: echo 4096 | sudo tee /sys/block/sdc/queue/nr_requestsIn my case this nearly triples (~56MB/s) the throughput for the 4GB random data test explained in my question. Of course, the performance still falls ~100MB/s short of unencrypted IO. Investigation Multicore blktrace I further investigated the problematic scenario in which a btrfs is placed on top of a LUKS encrypted block device. To show me what write instructions are issued to the actual block device, I used blktrace like this: sudo blktrace -a write -d /dev/sdc -o - | blkparse -b 1 -i - | grep -w DWhat this does is (as far as I was able to comprehend) trace IO requests to /dev/sdc which are of type "write", then parse this to human readable output but further restrict the output to action "D", which is (according to man blkparse) "IO issued to driver". The result was something like this (see about 5000 lines of output of the multicore log): 8,32 0 32732 127.148240056 3 D W 38036976 + 240 [ksoftirqd/0] 8,32 0 32734 127.149958221 3 D W 38038176 + 240 [ksoftirqd/0] 8,32 0 32736 127.160257521 3 D W 38038416 + 240 [ksoftirqd/0] 8,32 1 30264 127.186905632 13 D W 35712032 + 240 [ksoftirqd/1] 8,32 1 30266 127.196561599 13 D W 35712272 + 240 [ksoftirqd/1] 8,32 1 30268 127.209431760 13 D W 35713872 + 240 [ksoftirqd/1]
Column 1: major,minor of the block device
Column 2: CPU ID
Column 3: sequence number
Column 4: time stamp
Column 5: process ID
Column 6: action
Column 7: RWBS data (type, sector, length)
This is a snippet of the output produced while dd'ing the 4GB random data onto the mounted filesystem. It is clear that at least two processes are involved. The remaining log shows that all four processors are actually working on it. Sadly, the write requests are not ordered anymore. While CPU0 is writing somewhere around the 38038416th sector, CPU1, which is scheduled afterwards, is writing somewhere around the 35713872nd sector. That's bad. Singlecore blktrace I did the same experiment after disabling multi-threading and disabling the second core of my CPU. Of course, only one processor is involved in writing to the stick. But more importantly, the write requests are properly sequential, which is why the full write performance of ~170MB/s is achieved in the otherwise same setup. Have a look at about 5000 lines of output in the singlecore log. Discussion Now that I know the cause and the proper Google search terms, the information about this problem is bubbling up to the surface. As it turns out, I am not the first one to notice.Four years ago, a patch brought multi-threaded dm-crypt to the kernel. That commit pretty much matches my findings exactly. Two years ago, patches were discussed improving dm-crypt performance, including re-ordering of write requests. One year ago, the topic was still discussed. Recently, a patch enabling sorting for dm-crypt was finally committed to the kernel. 
There is an interesting email with performance tests (which I did not read very much of) concerning this phenomenon.Fixed in current kernels (>=4.0.2) Because I (later) found the kernel commit obviously targeted at this exact problem, I wanted to try an updated kernel. [After compiling it myself and then finding out it's already in debian/sid] It turns out that the problem is indeed fixed. I don't know the exact kernel release in which the fix appeared, but the original commit will give clues to anyone interested. For the record: $ uname -a Linux t440p 4.0.0-1-amd64 #1 SMP Debian 4.0.2-1 (2015-05-11) x86_64 GNU/Linux $ dd if=/home/schlimmchen/Documents/random of=/mnt/dd-test bs=1M conv=fsync 4294967296 bytes (4.3 GB) copied, 29.7559 s, 144 MB/sA hat tip to Mikulas Patocka, who authored the commit.
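If you are stuck on an older kernel and rely on the nr_requests mitigation, note that the sysfs setting does not survive a reboot. One way to make it persistent is a udev rule; this is a sketch, and the rule file name and the KERNEL match are assumptions you would adapt to your own device:

# /etc/udev/rules.d/60-nr-requests.rules (hypothetical file name)
# Raise the IO scheduler queue depth for the affected device at add/change time
ACTION=="add|change", KERNEL=="sdc", SUBSYSTEM=="block", ATTR{queue/nr_requests}="4096"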
I am investigating a problem where encrypting a block device imposes a huge performance penalty when writing to it. Hours of Internet reading and experiments did not provide me with a proper understanding, let alone a solution. The question in short: Why do I get perfectly fast write speeds when putting a btrfs onto a block device (~170MB/s), while the write speed plummets (~20MB/s) when putting a dm-crypt/LUKS in between the file system and the block device, although the system is more than capable of sustaining a sufficiently high encryption throughput? Scenario /home/schlimmchen/random is a 4.0GB file filled with data from /dev/urandom earlier. dd if=/dev/urandom of=/home/schlimmchen/Documents/random bs=1M count=4096Reading it is super fast: $ dd if=/home/schlimmchen/Documents/random of=/dev/null bs=1M 4265841146 bytes (4.3 GB) copied, 6.58036 s, 648 MB/s $ dd if=/home/schlimmchen/Documents/random of=/dev/null bs=1M 4265841146 bytes (4.3 GB) copied, 0.786102 s, 5.4 GB/s(the second time, the file was obviously read from cache). Unencrypted btrfs The device is directly formatted with btrfs (no partition table on the block device). $ sudo mkfs.btrfs /dev/sdf $ sudo mount /dev/sdf /mnt $ sudo chmod 777 /mntWrite speed gets as high as ~170MB/s: $ dd if=/home/schlimmchen/Documents/random of=/mnt/dd-test1 bs=1M conv=fsync 4265841146 bytes (4.3 GB) copied, 27.1564 s, 157 MB/s $ dd if=/home/schlimmchen/Documents/random of=/mnt/dd-test2 bs=1M conv=fsync 4265841146 bytes (4.3 GB) copied, 25.1882 s, 169 MB/s $ dd if=/home/schlimmchen/Documents/random of=/mnt/dd-test3 bs=1M conv=fsync 4265841146 bytes (4.3 GB) copied, 29.8419 s, 143 MB/sRead speed is well above 200MB/s. $ dd if=/mnt/dd-test1 of=/dev/null bs=1M 4265841146 bytes (4.3 GB) copied, 19.8265 s, 215 MB/s $ dd if=/mnt/dd-test2 of=/dev/null bs=1M 4265841146 bytes (4.3 GB) copied, 19.9821 s, 213 MB/s $ dd if=/mnt/dd-test3 of=/dev/null bs=1M 4265841146 bytes (4.3 GB) copied, 19.8561 s, 215 MB/sEncrypted btrfs on block device The device is formatted with LUKS, and the resultant device is formatted with btrfs: $ sudo cryptsetup luksFormat /dev/sdf $ sudo cryptsetup luksOpen /dev/sdf crypt $ sudo mkfs.btrfs /dev/mapper/crypt $ sudo mount /dev/mapper/crypt /mnt $ sudo chmod 777 /mnt $ dd if=/home/schlimmchen/Documents/random of=/mnt/dd-test1 bs=1M conv=fsync 4265841146 bytes (4.3 GB) copied, 210.42 s, 20.3 MB/s $ dd if=/home/schlimmchen/Documents/random of=/mnt/dd-test2 bs=1M 4265841146 bytes (4.3 GB) copied, 207.402 s, 20.6 MB/sRead speed suffers only marginally (why does it at all?): $ dd if=/mnt/dd-test1 of=/dev/null bs=1M 4265841146 bytes (4.3 GB) copied, 22.2002 s, 192 MB/s $ dd if=/mnt/dd-test2 of=/dev/null bs=1M 4265841146 bytes (4.3 GB) copied, 22.0794 s, 193 MB/sluksDump: http://pastebin.com/i9VYRR0p Encrypted btrfs in file on btrfs on block device The write speed "skyrockets" to over 150MB/s when writing into an encrypted file. I put a btrfs onto the block device, allocated a 16GB file, which I lukfsFormat'ed and mounted. 
$ sudo mkfs.btrfs /dev/sdf -f $ sudo mount /dev/sdf /mnt $ sudo chmod 777 /mnt $ dd if=/dev/zero of=/mnt/crypted-file bs=1M count=16384 conv=fsync 17179869184 bytes (17 GB) copied, 100.534 s, 171 MB/s $ sudo cryptsetup luksFormat /mnt/crypted-file $ sudo cryptsetup luksOpen /mnt/crypted-file crypt $ sudo mkfs.btrfs /dev/mapper/crypt $ sudo mount /dev/mapper/crypt /tmp/nested/ $ dd if=/home/schlimmchen/Documents/random of=/tmp/nested/dd-test1 bs=1M conv=fsync 4265841146 bytes (4.3 GB) copied, 26.4524 s, 161 MB/s $ dd if=/home/schlimmchen/Documents/random of=/tmp/nested/dd-test2 bs=1M conv=fsync 4265841146 bytes (4.3 GB) copied, 27.5601 s, 155 MB/sWhy is the write performance increasing like this? What does this particular nesting of filesystems and block devices achieve to aid in high write speeds? Setup The problem is reproducible on two systems running the same distro and kernel. However, I also observed the low write speeds with kernel 3.19.0 on System2.Device: SanDisk Extreme 64GB USB3.0 USB Stick System1: Intel NUC 5i5RYH, i5-5250U (Broadwell), 8GB RAM, Samsung 840 EVO 250GB SSD System2: Lenovo T440p, i5-4300M (Haswell), 16GB RAM, Samsung 850 PRO 256GB SSD Distro/Kernel: Debian Jessie, 3.16.7 cryptsetup: 1.6.6 /proc/crypto for System1: http://pastebin.com/QUSGMfiS cryptsetup benchmark for System1: http://pastebin.com/4RxzPFeT btrfs(-tools) is version 3.17 lsblk -t /dev/sdf: http://pastebin.com/nv49tYWcThoughtsAlignment is not the cause as far as I can see. Even if the stick's page size is 16KiB, the cryptsetup payload start is aligned to 2MiB anyway. --allow-discards (for cryptsetup's luksOpen) did not help, as I was expecting. While doing a lot less experiments with it, I observed very similar behavior with an external hard drive, connected through a USB3.0 adapter. It seems to me that the system is writing 64KiB blocks. A systemtrap script I tried indicates that at least. /sys/block/sdf/stat backs this hypothesis up since a lot of writes are merged. So my guess is that writing in too small blocks is not the cause. No luck with changing the block device queue scheduler to NOOP. Putting the crypt into an LVM volume did not help.
Abysmal general dm-crypt (LUKS) write performance
I suggest using a different testing method. hdparm is a bit weird as it gives device addresses rather than filesystem addresses, and it doesn't say which device those addresses relate to (e.g. it resolves partitions, but not devicemapper targets, etc.). Much easier to use something that sticks with filesystem addresses, that way it's consistent (maybe except for non-traditional filesystems like zfs/btrfs). Create a test file: (not random on purpose) # yes | dd iflag=fullblock bs=1M count=1 of=trim.test Get the address, length and blocksize: (exact command depends on filefrag version) # filefrag -s -v trim.test File size of trim.test is 1048576 (256 blocks, blocksize 4096) ext logical physical expected length flags 0 0 34048 256 eof trim.test: 1 extent foundGet the device and mountpoint: # df trim.test /dev/mapper/something 32896880 11722824 20838512 37% /mount/pointWith this set up, you have a file trim.test filled with yes-pattern on /dev/mapper/something at address 34048 with length of 256 blocks of 4096 bytes. Reading that from the device directly should produce the yes-pattern: # dd bs=4096 skip=34048 count=256 if=/dev/mapper/something | hexdump -C 00000000 79 0a 79 0a 79 0a 79 0a 79 0a 79 0a 79 0a 79 0a |y.y.y.y.y.y.y.y.| * 00100000If TRIM is enabled, this pattern should change when you delete the file. Note that caches need to be dropped also, otherwise dd will not re-read the data from disk. # rm trim.test # sync # fstrim -v /mount/point/ # when not using 'discard' mount option # echo 1 > /proc/sys/vm/drop_caches # dd bs=4096 skip=34048 count=256 if=/dev/mapper/something | hexdump -COn most SSD that would result in a zero pattern: 00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * 00100000If encryption is involved, you will see a random pattern instead: 00000000 1f c9 55 7d 07 15 00 d1 4a 1c 41 1a 43 84 15 c0 |..U}....J.A.C...| 00000010 24 35 37 fe 05 f7 43 93 1e f4 3c cc d8 83 44 ad |$57...C...<...D.| 00000020 46 80 c2 26 13 06 dc 20 7e 22 e4 94 21 7c 8b 2c |F..&... ~"..!|.,|That's because physically trimmed, the crypto layer reads zeroes and decrypts those zeroes to "random" data. If the yes-pattern persists, most likely no trimming has been done.
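The individual steps above can be strung together into a rough test script. This is only a sketch: it assumes the filefrag output format shown above (the physical address in the third column of the third output line) and a 4096-byte blocksize, so verify the extracted address by hand before trusting the verdict.

#!/bin/bash
# trim-test.sh <dir-on-filesystem> <underlying-block-device> -- run as root
set -e
dir=$1 dev=$2

yes | dd iflag=fullblock bs=1M count=1 of="$dir/trim.test"
sync
# First extent's physical address (blocksize 4096 assumed)
addr=$(filefrag -v "$dir/trim.test" | awk 'NR==3 { gsub(/\./,""); print $3 }')
echo "testing physical address $addr on $dev"

rm "$dir/trim.test"
sync
fstrim -v "$(df --output=target "$dir" | tail -n1)"
echo 1 > /proc/sys/vm/drop_caches
# Zero pattern (or random garbage on dm-crypt) => trimmed; yes-pattern => not trimmed
dd bs=4096 skip="$addr" count=256 if="$dev" 2>/dev/null | hexdump -C | head -n4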
I tried to setup TRIM with LVM and dm-crypt on ubuntu 13.04 following this tutorial: http://blog.neutrino.es/2013/howto-properly-activate-trim-for-your-ssd-on-linux-fstrim-lvm-and-dmcrypt/ See the notes about my configuration and my testing procedure below. QuestionsIs there a reliable test if TRIM works properly? Is my test routine wrong or is my TRIM not working? If it's not working: what is wrong with my setup? How can I debug TRIM for my setup and make TRIM work?Configuration Here ist my configuration: cat /etc/crypttab sda3_crypt UUID=[...] none luks,discardand cat /etc/lvm/lvm.conf # [...] devices { # [ ... ] issue_discards = 1 # [ ... ] } # [...]The SSD is a Samsung 840 Pro. Here is my test-procedure To test the setup I just did sudo fstrim -v / which resulted in /: [...] bytes were trimmed Doing this again resulted in /: 0 bytes were trimmed which seems to make sense and indicated that TRIM seems to work. However then I did this test: dd if=/dev/urandom of=tempfile count=100 bs=512k oflag=direct sudo hdparm --fibmap tempfile tempfile: filesystem blocksize 4096, begins at LBA 0; assuming 512 byte sectors. byte_offset begin_LBA end_LBA sectors 0 5520384 5521407 1024 524288 5528576 5529599 1024 1048576 5523456 5525503 2048 2097152 5607424 5619711 12288 8388608 5570560 5603327 32768 25165824 5963776 5980159 16384 33554432 6012928 6029311 16384 41943040 6275072 6291455 16384 50331648 6635520 6639615 4096sync sudo hdparm --read-sector 5520384 /dev/sda /dev/sda: reading sector 5520384: succeeded 7746 4e11 bf42 0c93 25d3 2825 19fd 8eda bd93 8ec6 9942 bb98 ed55 87eb 53e1 01d5 c61a 3f52 19a1 0ae5 0798 c6e2 39d9 771a b89f 3fc5 e786 9b1d 3452 d5d7 9479 a80d 114a 7528 a79f f475 57dc aeaf 25f4 998c 3dd5 b44d 23bf 77f3 0ad9 8688 6518 28ee 81db 1473 08b5 befe 8f2e 5b86 c84e c7d2 1bdd 1065 6a23 fd0f 2951 d879 e823 021b fa84 b9c1 eadd 9154 c9f4 2ebe cd70 64ec 75a8 4d93 c8fa 3174 7277 1ffb e858 5eca 7586 8b2e 9dbc ab12 40ab eb17 8187 e67d 5e0d 0005 5867 b924 5cfd 6723 9e4a 6f5f 99a4 a3b0 eeac 454a 83b6 c528 1106 6682 ca77 4edf 2180 bf0c b175 fabb 3d4b 37e2 b834 9e3e 82f2 2fdd 2c6a c6ca 873f e71e f979 160f 5778 356f 2aea 6176 46b6 72b9 f76e ee51 979c 326b 1436 7cfe f677 bfcd 4c3c 9e11 4747 45c1 4bb2 4137 03a1 e4c8 e9dd 43b4 a3b4 ce1b d218 4161 bf64 727b 75d8 dcc2 e14c ebec 2126 25da 0300 12bd 6b1a 28b3 824f 3911 c960 527d 97cd de1b 9f08 9a8e dcdc e65f 1875 58ca be65 82bf e844 50b8 cc1b 7466 58b8 e708 bd3d c01f 64fb 9317 a77a e43b 671f e1fb e328 93a9 c9c7 291c 56e0 c6c1 f011 b94d 9dc7 71e6 c8b1 5720 b8c9 b1a6 14f1 7299 9122 912b 312a 0f2f a31a 8bf9 9f8c 54e6 96f3 60b8 04a7 7dc9 3caa db0a a837 e5d7 2752 b477 c22d 7598 44e1 84e9 25d4 5db5 9f19 f73b 85a0 c656 373a ec34 55fb e1fc 124e 4674 1ba8 1a84 6aa4 7cb5 455e f416 adc6 a125 c4d4 8323 4eee 2493 2920 4e38 524c 1981sudo rm tempfile sync sudo fstrim / sync sudo hdparm --read-sector 5520384 /dev/sda/dev/sda: reading sector 5520384: succeeded 7746 4e11 bf42 0c93 25d3 2825 19fd 8eda bd93 8ec6 9942 bb98 ed55 87eb 53e1 01d5 c61a 3f52 19a1 0ae5 0798 c6e2 39d9 771a b89f 3fc5 e786 9b1d 3452 d5d7 9479 a80d 114a 7528 a79f f475 57dc aeaf 25f4 998c 3dd5 b44d 23bf 77f3 0ad9 8688 6518 28ee 81db 1473 08b5 befe 8f2e 5b86 c84e c7d2 1bdd 1065 6a23 fd0f 2951 d879 e823 021b fa84 b9c1 eadd 9154 c9f4 2ebe cd70 64ec 75a8 4d93 c8fa 3174 7277 1ffb e858 5eca 7586 8b2e 9dbc ab12 40ab eb17 8187 e67d 5e0d 0005 5867 b924 5cfd 6723 9e4a 6f5f 99a4 a3b0 eeac 454a 83b6 c528 1106 6682 ca77 4edf 2180 bf0c b175 fabb 3d4b 37e2 b834 9e3e 82f2 2fdd 2c6a c6ca 873f e71e f979 160f 5778 356f 2aea 
6176 46b6 72b9 f76e ee51 979c 326b 1436 7cfe f677 bfcd 4c3c 9e11 4747 45c1 4bb2 4137 03a1 e4c8 e9dd 43b4 a3b4 ce1b d218 4161 bf64 727b 75d8 dcc2 e14c ebec 2126 25da 0300 12bd 6b1a 28b3 824f 3911 c960 527d 97cd de1b 9f08 9a8e dcdc e65f 1875 58ca be65 82bf e844 50b8 cc1b 7466 58b8 e708 bd3d c01f 64fb 9317 a77a e43b 671f e1fb e328 93a9 c9c7 291c 56e0 c6c1 f011 b94d 9dc7 71e6 c8b1 5720 b8c9 b1a6 14f1 7299 9122 912b 312a 0f2f a31a 8bf9 9f8c 54e6 96f3 60b8 04a7 7dc9 3caa db0a a837 e5d7 2752 b477 c22d 7598 44e1 84e9 25d4 5db5 9f19 f73b 85a0 c656 373a ec34 55fb e1fc 124e 4674 1ba8 1a84 6aa4 7cb5 455e f416 adc6 a125 c4d4 8323 4eee 2493 2920 4e38 524c 1981This seems to indicate that TRIM doesn't work. Since sudo hdparm -I /dev/sda | grep -i TRIM * Data Set Management TRIM supported (limit 8 blocks) * Deterministic read ZEROs after TRIMEdit Here is the output of sudo dmsetup table lubuntu--vg-root: 0 465903616 linear 252:0 2048 lubuntu--vg-swap_1: 0 33308672 linear 252:0 465905664 sda3_crypt: 0 499222528 crypt aes-xts-plain64 00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 0 8:3 4096 1 allow_discardsHere is my /etc/fstab: # <file system> <mount point> <type> <options> <dump> <pass> /dev/mapper/lubuntu--vg-root / ext4 errors=remount-ro 0 1 # /boot was on /dev/sda2 during installation UUID=f700d855-96d0-495e-a480-81f52b965bda /boot ext2 defaults 0 2 # /boot/efi was on /dev/sda1 during installation UUID=2296-2E49 /boot/efi vfat defaults 0 1 /dev/mapper/lubuntu--vg-swap_1 none swap sw 0 0 # tmp tmpfs /tmp tmpfs nodev,nosuid,noexec,mode=1777 0 0 Edit: I finally reported it as a bug in https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1213631 Hope somebody will find a solution there or at least test the setup and verify the bug. Update Now it works, see accepted answer.
Trim with LVM and dm-crypt
"sda5_crypt" crypttab change as per suggestion below: Replace OLD_NAME with NEW_NAME in /etc/crypttab & /etc/fstab, and then: # dmsetup rename OLD_NAME NEW_NAME # cp -a /dev/mapper/NEW_NAME /dev/mapper/OLD_NAME # update-initramfs -u -k all # rm /dev/mapper/OLD_NAME # update-grub # reboot
My system is fully encrypted with dm-crypt and LVM. I recently moved the encrypted partition from /dev/sda5 to /dev/sda2. My question is: how can I change the name the encrypted partition is mapped to from sda5_crypt to sda2_crypt? I can boot the system all right. But the prompt I get at boot time says (sda5_crypt) though the UUID maps to /dev/sda2: Volume group "vg" not found Skipping volume group vg Unlocking the disk /dev/.../UUID (sda5_crypt) Enter passphrase:I tried to live-boot, decrypt sda2, activate vg, chroot to /dev/vg/root and run update-grub2 but to no avail. Merely editing /etc/crypttab doesn't work either.
How to change the name an encrypted full-system partition is mapped to
Each key slot has its own iteration time. If you want to change the number of iterations, create a new slot with the same passphrase and a new number of iterations, then remove the old slot. cryptsetup -i 100000 --key-slot 2 luksAddKey $device cryptsetup luksKillSlot $device 1I think the hash algorithm cannot be configured per slot, it's always PBKDF2 with a globally-chosen hash function. Recent versions of cryptsetup include a tool cryptsetup-reencrypt, which can change the main encryption key and all the parameters, but it is considered experimental (and it reencrypts the whole device even though this would not be necessary to merely change the password-based key derivation function).
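Before killing the old slot, it is worth confirming that the new slot really works and carries the intended settings (a short sketch; the slot number matches the example above):

# The new slot must unlock the device before you remove the old one
cryptsetup --key-slot 2 luksOpen --test-passphrase $device

# Per-slot iteration counts are visible in the header dump
cryptsetup luksDump $device | grep -A1 'Key Slot'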
How can I change the hash-spec and iter-time of an existing dm-crypt LUKS device? Clearly I can pass the options if I create a new device, for example something like this: sudo cryptsetup luksFormat --cipher aes-cbc-essiv:sha256 --key-size 256 --iter-time 2100 --hash sha512 /dev/loop0But if the device already exists, how can I change for example sha256 to sha1 or change the iteration time without "destroying" the device. (Clearly you would have to retype your password since a new hash will be generated.)
How to change the hash-spec and iter-time of an existing dm-crypt LUKS device?
It's about online resize. For example if you use LVM, create a LV of 1G size, and put LUKS on that, it's like this: # lvcreate -L1G -n test VG # cryptsetup luksFormat /dev/mapper/VG-test # cryptsetup luksOpen /dev/mapper/VG-test lukstest # blockdev --getsize64 /dev/mapper/VG-test 1073741824 # blockdev --getsize64 /dev/mapper/lukstest 1071644672So the LUKS device is about the same size as the VG-test device (1G minus 2MiB used by the LUKS header). Now what happens when you make the LV larger? # lvresize -L+1G /dev/mapper/VG-test Size of logical volume VG/test changed from 1.00 GiB (16 extents) to 2.00 GiB (32 extents). Logical volume test successfully resized. # blockdev --getsize64 /dev/mapper/VG-test 2147483648 # blockdev --getsize64 /dev/mapper/lukstest 1071644672The LV is 2G large now, but the LUKS device is still stuck at 1G, as that was the size it was originally opened with. Once you luksClose and luksOpen, it would also be 2G — because LUKS does not store a size, it defaults to the device size at the time you open it. So close and open (or simply rebooting) would update the crypt mapping to the new device size. However, since you can only close a container after umounting/stopping everything inside of it, this is basically an offline resize. But maybe you have a mounted filesystem on the LUKS, it's in use, and you don't want to umount it for the resize, and that's where cryptsetup resize comes in as an online resize operation. # cryptsetup resize /dev/mapper/lukstest # blockdev --getsize64 /dev/mapper/lukstest 2145386496cryptsetup resize updates the active crypt mapping to the new device size, no umount required, and then you can follow it up with resize2fs or whatever to also grow the mounted filesystem itself online. If you don't mind rebooting or remounting, you'll never need cryptsetup resize as it happens automatically offline. But if you want to do it online, that's the only way.When shrinking (cryptsetup resize --size x), the resize is temporary. LUKS does not store device size, so next time you luksOpen, it will simply use the device size again. So shrinking sticks only if the backing device was also shrunk accordingly. For a successful shrink you have to work backwards... growing is grow partition first, then LUKS, then filesystem... shrinking is shrink filesystem first, and partition last.If the resize doesn't work, it's most likely due to the backing device not being resized, for example the kernel may refuse changes to the partition table while the drive is in use. Check with blockdev that all device layers have the sizes you expect them to have.
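As a concrete sketch of the shrink direction (sizes are examples; note that cryptsetup resize --size counts 512-byte sectors, which is easy to get wrong, so verify each layer with blockdev):

# 1. Shrink the filesystem inside the mapping first (ext4 example)
resize2fs /dev/mapper/lukstest 500M

# 2. Shrink the crypt mapping to match: 500 MiB = 1024000 sectors of 512 bytes
cryptsetup resize --size 1024000 lukstest

# 3. Only now shrink the backing device; leave room for the 2 MiB LUKS1 header
#    (lvresize asks for confirmation when shrinking)
lvresize -L504M /dev/mapper/VG-test

# Verify: every layer should have the size you expect
blockdev --getsize64 /dev/mapper/VG-test /dev/mapper/lukstest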
The LUKS / dm-crypt / cryptsetup FAQ page says:2.15 Can I resize a dm-crypt or LUKS partition? Yes, you can, as neither dm-crypt nor LUKS stores partition size.I'm befuzzled:What is "resized" if no size information is stored? How does a "resize" get remembered across open / closes of a encrypted volume?
What does `cryptsetup resize` do if LUKS doesn't store partition size?
I think your testing does not match the documentation (man fstrim).-v, --verbose Verbose execution. With this option fstrim will output the number of bytes passed from the filesystem down the block stack to the device for potential discard. This number is a maximum discard amount from the storage device's perspective, because FITRIM ioctl called repeated will keep sending the same sectors for discard repeatedly. fstrim will report the same potential discard bytes each time, but only sectors which had been written to between the discards would actually be discarded by the storage device. Further, the kernel block layer reserves the right to adjust the discard ranges to fit raid stripe geometry, non-trim capable devices in a LVM setup, etc. These reductions would not be reflected in fstrim_range.len (the --length option).I suggest looking for discard requests using blktrace instead, i.e. at the same time as you run fstrim. Hopefully it will show whether discard requests are being submitted to the block device on the bottom of the stack. You can compare the results between sda1 and sda2 (after a fresh boot, to avoid the undocumented behaviour on sda1). btrace -a discard $DEV
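In practice that means watching the device while the trim runs, either in two terminals or backgrounded like this sketch (device and mountpoint are examples):

# Watch the device for discard requests...
btrace -a discard /dev/sda &
sleep 1
# ...while triggering the trim
fstrim -v /
kill %1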
I try to setup Fedora 25 with dm-crypt and LVM, but struggle to make TRIM work. $ sudo fstrim -av /boot: 28.6 MiB (30003200 bytes) trimmed /: 56.5 GiB (60672704512 bytes) trimmed$ sudo fstrim -av /boot: 0 B (0 bytes) trimmed /: 56.5 GiB (60671877120 bytes) trimmedAs you can see from the above output, repeatedly running fstrim works on unencrypted ext4 /boot, but seems to have no effect on / (which is on the same disk). The setup is SSD -> dm-crypt -> LVM -> XFS $ lsblk -D NAME DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO sda 0 512B 2G 0 ├─sda2 0 512B 2G 0 │ └─luks-dd5ce54a-34c9-540c-a4cf-2a712b8a3a5e 0 512B 2G 0 │ └─fedora-root 0 512B 2G 0 └─sda1 0 512B 2G 0According to this question, DISC-ZERO == 0 should not be the problem # cat /etc/crypttab luks-dd... UUID=dd.. none discard# cat /etc/lvm/lvm.conf devices { ... issue_discards = 1 ... }I've also added rd.luks.options=discard option to /etc/default/grub, and updated initramfs and grub.cfg: # grub2-mkconfig -o /boot/grub2/grub.cfg # dracut -fThe discard option did correctly propagate: # dmsetup table luks-d... 0 233385984 crypt aes-xts-plain64 00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 0 8:2 4096 1 allow_discardsI've tried variations of the above setup that can be found around the web, but this seems to follow manual pages. What I did not try was using different file system, but XFS should be supported.
fstrim doesn't seem to trim a partition that uses lvm and dm-crypt
In fact, modifying mount is possible, as I learned from the existence of mount.ntfs-3g. I'm doing only guesswork, but I suspect mount -t sometype results in a call to mount.sometype $DEV $MOUNTPOINT $OPTIONS; feel free to correct me here or quote some actual documentation. Especially the option -o loop is already treated, so there's no need for losetup anymore...

Symlink/create the mount script as /sbin/mount.crypto_LUKS. Remove the loopdevice part and instead just use the -o loop switch. Here's my /sbin/mount.crypto_LUKS:

#!/bin/bash
set -e
if [[ $(mount | grep ${2%%/} | wc -l) -gt 0 ]]; then
    echo "Path $2 is already mounted!" >&2
    exit 9
else
    MAPPER=$(mktemp -up /dev/mapper)
    cryptsetup luksOpen $1 $(basename $MAPPER)
    shift
    mount $MAPPER $* || cryptsetup luksClose $(basename $MAPPER)
fi

Now I just have to run mount -o loop ~/container /mnt/decrypted, and mount will prompt me for the password and then mount the container, automatically releasing the loop device once the container is closed. If the decrypted filesystem fails to mount, the container will be closed again, but you can modify that of course. Or implement some option parsing instead of passing everything on to mount.

I was hoping the same could be achieved via /sbin/umount.luks, but umount /mnt/decrypted (even with -t crypto_LUKS) still only does the usual unmount, leaving the container open. If you find a way to have umount call my dm.umount script instead, please let me know... At the moment, directly calling umount is discouraged since you will have to figure out the /dev/mapper name to manually cryptsetup luksClose $MAPPER. At least the loop device will be released automatically if mount -o loop was used before...
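Until umount learns to call a crypto_LUKS helper, closing by hand can at least be reduced to a small script. A sketch, assuming the mapping was created under /dev/mapper as above:

#!/bin/bash
# dm.umount <mountpoint> -- look up the mapper device, unmount, close
set -e
MAPPER=$(findmnt -no SOURCE "$1")
umount "$1"
cryptsetup luksClose "$(basename "$MAPPER")"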
I created an encrypted container via #!/bin/bash dd if=/dev/zero of=$1 bs=1 count=0 seek=$2 MAPPER=$(mktemp -up /dev/mapper) LOOPDEV=$(losetup --find --show $1) cryptsetup luksFormat $LOOPDEV cryptsetup luksOpen $LOOPDEV $(basename $MAPPER) mkfs.ext3 $MAPPER cryptsetup luksClose $MAPPER losetup -d $LOOPDEVi.e. a file e.g. container specified to this script will contain a ext3 filesystem encrypted via cryptsetup luksFormat. To mount it, I currently use another script, say dm.mount container /mnt/decrypted: #!/bin/bash set -e MAPPER=$(mktemp -up /dev/mapper) LOOPDEV=$(losetup --find --show $1) cryptsetup luksOpen $LOOPDEV $(basename $MAPPER) || losetup -d $LOOPDEV mount $MAPPER $2 || ( cryptsetup luksClose $MAPPER losetup -d $LOOPDEV )and to unmount it dm.umount /mnt/decrypted: #!/bin/bash set -e MAPPER=$(basename $(mount | grep $1 | gawk ' { print $1 } ')) LOOPDEV=$(cryptsetup status $MAPPER | grep device | gawk ' { print $2 } ') umount $1 cryptsetup luksClose $MAPPER losetup -d $LOOPDEVThere's a lot of redundancy and manually grabbing a loop device and mapper both of which could remain anonymous. Is there a way to simply do something like mount -o luks ~/container /mnt/decrypted (prompting for the passphrase) and umount /mnt/decrypted the easy way instead?edit Basically I am happy with my scripts above (although the error checking could be improved...), soHow can a mount option -o luks=~/container be implemented similar to -o loop ~/loopfile using the scripts I wrote?Can this be achieved without rewriting mount? Or alternatively, could -t luks -o loop ~/container be implemented?
How to mount a cryptsetup container just with `mount`?
For changing the file system UUID you have to decrypt /dev/sda1 and then run tune2fs on the decrypted device mapper device. sda1 itself does not have a UUID thus it cannot be changed. The LUKS volume within sda1 does have a UUID (which is of limited use because you probably cannot use it for mounting), though. It can be changed with cryptsetup luksUUID /dev/sda1 --uuid "$newuuid"
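Putting both steps together (a sketch; the mapper name tmp_crypt is arbitrary and /dev/sda1 is the example device from the question):

newuuid=$(uuidgen)

# Filesystem UUID: must go through the opened mapping
cryptsetup luksOpen /dev/sda1 tmp_crypt
tune2fs -U "$newuuid" /dev/mapper/tmp_crypt
cryptsetup luksClose tmp_crypt

# LUKS container UUID: operates on the raw partition
cryptsetup luksUUID /dev/sda1 --uuid "$(uuidgen)"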
I'm trying to change a partition's UUID, the problem is that I'm trying to change an encrypted volume. So I can't use the usual method described here. Since it throws the following error: tune2fs: Bad magic number in super-block while trying to open /dev/sda1 Couldn't find valid filesystem superblock.So let's suppose this is my blkid: /dev/sda1: UUID="adc4277c-0057-4455-a25e-94dec062571c" TYPE="crypto_LUKS" PARTUUID="23487624-01" /dev/sda2: UUID="9f16a55e-954b-4947-87ce-b0055c6ac953" TYPE="crypto_LUKS" PARTUUID="23487624-02" /dev/mapper/root: LABEL="root" UUID="6d1b1654-016b-4dc6-8330-3c242b2c538b" TYPE="ext4" /dev/mapper/home: LABEL="home" UUID="9c48b8fe-36a6-4958-af26-d15a2a89878b" TYPE="ext4"What I want to change in this example is the /dev/sda1 UUID. How can I achieve this?
Change encrypted partition UUID
1a - It really doesn't matter all that much. Whichever hash you use for the key derivation function, LUKS makes sure it will be computationally expensive. It will simply loop it until 1 second of real time has passed.

1b - The key derivation method has no influence on performance. The cipher itself does. cryptsetup benchmark shows you as much.

2 - AES is the fastest if your CPU is modern enough to support AES-NI instructions (hardware acceleration for AES). If you go with Serpent now, you may not be able to utilize the AES-NI of your next laptop.

# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1      1165084 iterations per second
PBKDF2-sha256     781353 iterations per second
PBKDF2-sha512     588426 iterations per second
PBKDF2-ripemd160  726160 iterations per second
PBKDF2-whirlpool  261882 iterations per second
#  Algorithm | Key |  Encryption |  Decryption
     aes-cbc   128b   692.9 MiB/s  3091.3 MiB/s
 serpent-cbc   128b    94.6 MiB/s   308.6 MiB/s
 twofish-cbc   128b   195.2 MiB/s   378.7 MiB/s
     aes-cbc   256b   519.5 MiB/s  2374.0 MiB/s
 serpent-cbc   256b    96.5 MiB/s   311.3 MiB/s
 twofish-cbc   256b   197.9 MiB/s   378.0 MiB/s
     aes-xts   256b  2630.6 MiB/s  2714.8 MiB/s
 serpent-xts   256b   310.4 MiB/s   303.8 MiB/s
 twofish-xts   256b   367.4 MiB/s   376.6 MiB/s
     aes-xts   512b  2048.6 MiB/s  2076.1 MiB/s
 serpent-xts   512b   317.0 MiB/s   304.2 MiB/s
 twofish-xts   512b   368.7 MiB/s   377.0 MiB/s

Keep in mind this benchmark does not use storage, so you should verify these results with whatever storage and filesystem you are actually going to use.
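A quick way to check whether your CPU offers AES-NI (if the grep prints nothing, the flag is absent):

# CPU flag: "aes" among the flags means hardware AES support
grep -m1 -wo aes /proc/cpuinfo

# Kernel side: on Intel/AMD CPUs the aesni_intel module should be loaded
lsmod | grep aesni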
I decided to encrypt my root partition with LUKS+LVM. My ThinkPad setup:Samsung 830 128GB SSD 750GB HDD Core 2 Duo 2,5 GHz P9500 8GB RAMBut the more I read, the less I understand about those two following subjects: 1a. The cipher I was going to use SHA1 instead of 2/512 (as some suggest), because of that quote from cryptsetup FAQ:5.20 LUKS is broken! It uses SHA-1! No, it is not. SHA-1 is (academically) broken for finding collisions, but not for using it in a key-derivation function. And that collision vulnerability is for non-iterated use only. And you need the hash-value in verbatim. This basically means that if you already have a slot-key, and you have set the PBKDF2 iteration count to 1 (it is > 10'000 normally), you could (maybe) derive a different passphrase that gives you the the same slot-key. But if you have the slot-key, you can already unlock the key-slot and get the master key, breaking everything. So basically, this SHA-1 vulnerability allows you to open a LUKS container with high effort when you already have it open. The real problem here is people that do not understand crypto and claim things are broken just because some mechanism is used that has been broken for a specific different use. The way the mechanism is used matters very much. A hash that is broken for one use can be completely secure for other uses and here it is.Which I read as "there is no point of using anything other than SHA-1". But then some people tell me, that it's not exactly like that. So I no longer know what to think. 1b. Also, I could not find any information whether the cipher has any influence on disk read/write/seek performance once the disk is unlocked and system logged into. So does the complexity of the cipher affect only the "performance" on password entering stage, or also during normal use of the system? 2. The algorithm I have been reading on this since couple of days, but the more I read, the more confused I get. Everything I read says that AES is the fastest, and Serpent is the slowest. But not according to my laptop: $ cryptsetup benchmark Tests are approximate using memory only (no storage IO). PBKDF2-sha1 344926 iterations per second PBKDF2-sha256 198593 iterations per second PBKDF2-sha512 129007 iterations per second PBKDF2-ripemd160 271933 iterations per second PBKDF2-whirlpool 134295 iterations per second # Algorithm | Key | Encryption | Decryption aes-cbc 128b 149.8 MiB/s 147.9 MiB/s serpent-cbc 128b 51.0 MiB/s 196.4 MiB/s twofish-cbc 128b 127.6 MiB/s 152.5 MiB/s aes-cbc 256b 114.3 MiB/s 113.8 MiB/s serpent-cbc 256b 51.2 MiB/s 198.9 MiB/s twofish-cbc 256b 129.8 MiB/s 167.5 MiB/s aes-xts 256b 153.3 MiB/s 150.6 MiB/s serpent-xts 256b 176.4 MiB/s 184.1 MiB/s twofish-xts 256b 160.8 MiB/s 159.8 MiB/s aes-xts 512b 115.4 MiB/s 112.1 MiB/s serpent-xts 512b 178.6 MiB/s 184.2 MiB/s twofish-xts 512b 160.7 MiB/s 158.9 MiB/sSo it appears that Serpent's not only the fastest, but on top of that it is the fastest with the most complex key. Shouldn't it be the other way around? Am I reading it wrong, or something?
Trying to understand LUKS encryption
I did a small benchmark. It only tests writes though. Test data is a Linux kernel source tree (linux-3.8), already unpacked into memory (/dev/shm/ tmpfs), so there should be as little influence as possible from the data source. I used compressible data for this test since compression with non-compressible files is nonsense regardless of encryption. Using btrfs filesystem on a 4GiB LVM volume, on LUKS [aes, xts-plain, sha256], on RAID-5 over 3 disks with 64kb chunksize. CPU is an Intel E8400 2x3Ghz without AES-NI. Kernel is 3.8.2 x86_64. The script:

#!/bin/bash

PARTITION="/dev/lvm/btrfs"
MOUNTPOINT="/mnt/btrfs"

umount "$MOUNTPOINT" >& /dev/null

for method in no lzo zlib
do
    for iter in {1..3}
    do
        echo Prepare compress="$method", iter "$iter"
        mkfs.btrfs "$PARTITION" >& /dev/null
        mount -o compress="$method",compress-force="$method" "$PARTITION" "$MOUNTPOINT"
        sync
        time (cp -a /dev/shm/linux-3.8 "$MOUNTPOINT"/linux-3.8 ; umount "$MOUNTPOINT")
        echo Done compress="$method", iter "$iter"
    done
done

So in each iteration, it makes a fresh filesystem, and measures the time it takes to copy the linux kernel source from memory and umount. So it's a pure write-test, zero reads. The results:

Prepare compress=no, iter 1
real 0m12.790s
user 0m0.127s
sys 0m2.033s
Done compress=no, iter 1

Prepare compress=no, iter 2
real 0m15.314s
user 0m0.132s
sys 0m2.027s
Done compress=no, iter 2

Prepare compress=no, iter 3
real 0m14.764s
user 0m0.130s
sys 0m2.039s
Done compress=no, iter 3

Prepare compress=lzo, iter 1
real 0m11.611s
user 0m0.146s
sys 0m1.890s
Done compress=lzo, iter 1

Prepare compress=lzo, iter 2
real 0m11.764s
user 0m0.127s
sys 0m1.928s
Done compress=lzo, iter 2

Prepare compress=lzo, iter 3
real 0m12.065s
user 0m0.132s
sys 0m1.897s
Done compress=lzo, iter 3

Prepare compress=zlib, iter 1
real 0m16.492s
user 0m0.116s
sys 0m1.886s
Done compress=zlib, iter 1

Prepare compress=zlib, iter 2
real 0m16.937s
user 0m0.144s
sys 0m1.871s
Done compress=zlib, iter 2

Prepare compress=zlib, iter 3
real 0m15.954s
user 0m0.124s
sys 0m1.889s
Done compress=zlib, iter 3

With zlib it's a lot slower, with lzo a bit faster, and in general, not worth the bother (difference is too small for my taste, considering I used easy-to-compress data for this test). I'd make a read test also, but it's more complicated as you have to deal with caching.
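For completeness, a read test would mainly need the page cache dropped between runs. A rough sketch of how that could look, under the same assumptions as the write script above:

#!/bin/bash
# Read-back test: time reading the whole tree with a cold cache
MOUNTPOINT="/mnt/btrfs"
sync
echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes
time tar -cf /dev/null "$MOUNTPOINT"/linux-3.8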
Encryption/decryption is often the main bottleneck when accessing an encrypted volume. Would using a filesystem with a fast transparent compression (such as BTRFS + LZO) help? The idea is that there would be less data to encrypt, and if the compression is significantly faster than the encryption algorithm, the overall processing time would be less. Update: As Mat pointed out, it depends on the compressibility of the actual data. Of course, I assume that its compressible, like source code or documents. Of course it has no meaning using it for media files (but I guess it won't hurt too much, as BTRFS tries to detect incompressible files.) Since testing this idea is a very time consuming process, I'm asking if somebody has already some experience with this. I tested just a very simple setup, and it seems to show a difference: $ touch BIG_EMPTY $ chattr +c BIG_EMPTY $ sync ; time ( dd if=/dev/zero of=BIG_EMPTY bs=$(( 1024*1024 )) count=1024 ; sync ) ... real 0m26.748s user 0m0.008s sys 0m2.632s$ touch BIG_EMPTY-n $ sync ; time ( dd if=/dev/zero of=BIG_EMPTY-n bs=$(( 1024*1024 )) count=1024 ; sync ) ... real 1m31.882s user 0m0.004s sys 0m2.916s
Will using a compressed filesystem over an encrypted volume improve performance?
It doesn't seem to be possible with the cryptsetup command. Unfortunately cryptsetup has a few such immutable flags... --allow-discards is also one of them. If this wasn't set at the time you opened the container, you can't add it later. At least, not with the cryptsetup command. However, since cryptsetup creates regular Device Mapper targets, you can resort to dmsetup to modify them. Of course, this isn't recommended for various reasons: it's like changing the partition table of partitions that are in use - mess it up and you might lose all your data. The device mapper allows dynamic remapping of all devices at runtime and it doesn't care about the safety of your data at all; which is why this feature is usually wrapped behind the LVM layer which keeps the necessary metadata around to make it safe. Create a read-only LUKS device: # truncate -s 100M foobar.img # cryptsetup luksFormat foobar.img # cryptsetup luksOpen --read-only foobar.img foobarThe way dmsetup sees it: # dmsetup info foobar Name: foobar State: ACTIVE (READ-ONLY) Read Ahead: 256 Tables present: LIVE [...] # dmsetup table --showkeys foobar 0 200704 crypt aes-xts-plain64 ef434503c1874d65d33b1c23a088bdbbf52cb76c7f7771a23ce475f8823f47df 0 7:0 4096Note the master key which normally shouldn't be leaked, as it breaks whatever brute-force protection LUKS offers. Unfortunately I haven't found a way without using it, as dmsetup also lacks a direct --make-this-read-write option. However dmsetup reload allows replacing a mapping entirely, so we'll replace it with itself in read-write mode. # dmsetup table --showkeys foobar | dmsetup reload foobar # dmsetup info foobar Name: foobar State: ACTIVE (READ-ONLY) Read Ahead: 256 Tables present: LIVE & INACTIVEIt's still read-only after the reload, because reload goes into the inactive table. To make the inactive table active, use dmsetup resume: # dmsetup resume foobar # dmsetup info foobar Name: foobar State: ACTIVE Read Ahead: 256 Tables present: LIVEAnd thus we have a read-write LUKS device. Does it work with a live filesystem? # cryptsetup luksOpen --readonly foobar.img foobar # mount /dev/mapper/foobar /mnt/foobar mount: /mnt/foobar: WARNING: device write-protected, mounted read-only. # mount -o remount,rw /mnt/foobar mount: /mnt/foobar: cannot remount /dev/mapper/foobar read-write, is write-protected.So it's read-only. Make it read-write and remount: # dmsetup table --showkeys foobar | dmsetup reload foobar # dmsetup resume foobar # mount -o remount,rw /mnt/foobar # echo hey it works > /mnt/foobar/amazing.txtCan we go back to read-only? # mount -o remount,ro /mnt/foobar # dmsetup table --showkeys foobar | dmsetup reload foobar --readonly # dmsetup resume foobar # mount -o remount,rw /mnt/foobar mount: /mnt/foobar: cannot remount /dev/mapper/foobar read-write, is write-protected.So it probably works. The process to add allow_discards flag to an existing crypt mapping is similar - you have to reload with a table that contains this flag. However a filesystem that already detected the absence of discard support, might not be convinced to re-detect this on the fly. So it's unclear how practical it is.Still, unless you have very good reason not to, you should stick to re-opening using regular cryptsetup commands, even if it means umounting and re-supplying the passphrase. It's safer all around and more importantly, doesn't circumvent the LUKS security concept.
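For the allow_discards case, the reload would look something like the following sketch. It assumes the existing crypt table has no optional parameters yet (the sed simply appends the parameter count and the flag); inspect the table by hand before resuming, since a malformed table can take the mapping down:

# Append " 1 allow_discards" to the crypt target line and reload it
dmsetup table --showkeys foobar | sed 's/$/ 1 allow_discards/' | dmsetup reload foobar
dmsetup resume foobar

# Confirm the flag took effect
dmsetup table foobar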
cryptsetup can be invoked with --readonly or -r option, which will set up a read-only mapping: cryptsetup --readonly luksOpen /dev/sdb1 sdb1Once I have opened a device as read-only, can I later re-map it to read-write? Obviously, I mean mapping it read-write without closing it first, and then opening it again. Can I remap it without having to type my password again? If this is not possible, is this just that cryptsetup does not support this, or is there some more fundamental level?
remap read-only LUKS partition to read-write
It depends a little on the distribution you are using and what components are included by dracut in the initramfs. For example, the cryptdevice= option is interpreted by the encrypt hook. Thus, it's only relevant for initramfs images that include this hook. The disadvantage of rd.luks.allow-discards and rd.luks.allow-discards= is that it simply doesn't work. The dracut.cmdline(7) description of these options is incorrect. I tested it under Fedora 26 where it doesn't work and there is even a bug report for Fedora 19 where this deviation between documented and actual behavior was discussed and it was closed as wont-fix. The luks.options= and rd.luks.options= are more generic as you basically can place any valid crypttab option in there, e.g. discard. Since they are interpreted by systemd-cryptsetup-generator which doesn't care about cryptdevice= you can't expect a useful interaction between these options. Note that luks.options= only has an effect for devices that aren't listed in the initramfs image's etc/crypttab file. Thus, to enable dm-crypt pass-though SSD trim support (a.k.a. discard) for dm-crypted devices opened during boot you have 2 options:add rd.luks.options=discard to the kernel command line and make sure that the initramfs image doesn't include a etc/crypttab add the discard option to the relevant entries in /etc/crypttab and make sure that the current version is included in the initramfs image.You can use lsinitrd /path/to/initramfs etc/crypttab for checking the initramfs image, dracut -v -f /path/to/initramfs-image for regenerating the image after changes to /etc and dmsetup table to see whether the crypted device was actually opened with the discard option (the relevant entries should include the string allow_discards then).
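Put together, the check-and-fix cycle looks like this (the image path follows the usual Fedora naming; adapt it to your system):

# Does the initramfs carry its own crypttab (which would mask luks.options=)?
lsinitrd /boot/initramfs-$(uname -r).img etc/crypttab

# After editing /etc/crypttab, regenerate the image
dracut -v -f /boot/initramfs-$(uname -r).img $(uname -r)

# After a reboot, confirm discard is active on the mapping
dmsetup table | grep allow_discards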
I'm confused between the various ways that LUKS/dmcrypt/cryptsetup discard/TRIM operations can be enabled via the Linux kernel command line.

The dracut manpage:

rd.luks.allow-discards
Allow using of discards (TRIM) requests on all LUKS partitions.

The systemd-cryptsetup-generator manpage:

luks.options=, rd.luks.options=
... If only a list of options, without an UUID, is specified, they apply to any UUIDs not specified elsewhere, and without an entry in /etc/crypttab. ...

The argument rd.luks.options=discard is recommended here. The Arch wiki section on LUKS and SSDs shows a third colon-separated field:

cryptdevice=/dev/sdaX:root:allow-discards

Questions:

What is the difference between discard and allow-discards? Is the former mandatory and the second optional?
Will luks.options= or rd.luks.options= apply given cryptdevice=/dev/sda2 (eg not a UUID)?
What if cryptdevice= is given a UUID, does that count as "specified elsewhere"?
Will luks.options= or rd.luks.options= overwrite / append / prepend if cryptsetup= already gives options?
Is there any disadvantage to using rd.luks.allow-discards which seems to be simplest if TRIM is wanted everywhere?
LUKS discard/TRIM: conflicting kernel command line options
Yes, of course. The vendor can just keep the master key. A backup of the LUKS header. As this key never changes even as you change the password, it allows full access to all the data. So you are entirely depending on trust here. Backdoors and everything else just come on top of that. In addition to the manpage, the Cryptsetup FAQ is a good read: http://code.google.com/p/cryptsetup/wiki/FrequentlyAskedQuestions It covers all sorts of loopholes. Your question is answered there as well in 6.7 Does a backup compromise security? I'm not a vendor, but in that case I would consider keeping a backup of the master key a service. If I sell to non-computer-savvy people, they may forget their passwords, and come to me for help. And having the master key is the only way to help in such a case, short of bruteforcing LUKS which is only possible if you can narrow it down to a few million possibilities (i.e. if you more or less know your passphrase but do not know which variant exactly you used). Of course if you do this you should be honest about it. If you want to be able to trust a system, you always have to set it up by yourself. This includes image install services in data centers and all other sorts of things.
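If you inherit such a system and want to invalidate any header copy the vendor might have kept, changing the passphrase is not enough; the master key itself has to change. Recent cryptsetup versions ship a tool for that (mentioned elsewhere in this collection as experimental). A sketch, assuming the encrypted partition is /dev/sda5; this rewrites every sector, so back up first, run it from a live system for a root device, and expect it to take hours:

# Re-encrypt in place with a fresh master key; old header backups become useless
cryptsetup-reencrypt /dev/sda5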
Suppose I want an Ubuntu setup (let it be 12.04) with full disk encryption. Ubuntu offers this as an option during installation. Suppose also that I have somebody else (e.g. my vendor) set this up for me and that I only get the finished system, including the predefined encryption password (and of course all other passwords). Is there any way that this person can reliably retain access to the encrypted data on the device even when I change the encryption password (and possibly other passwords)? (Note that I'm not talking about normal backdoors like installing an SSH key or a root shell or patching some daemon to receive commands from outside. I am talking specifically about backdoors that target the full disk encryption and possibly cannot be closed without having to reinstall or re-encrypt the whole system.) For example, with TrueCrypt it is (or was) possible to save the first sectors of an encrypted hard disk, because it only uses the password to encrypt the master key (which is then used to encrypt the data) - and if you replace both password and encrypted master key by replacing those sectors after a password change, you can practically undo an encryption password change. Is something like this possible with dm-crypt/LUKS? If not, anything similar?
Security of an encrypted (dm-crypt & LUKS) Ubuntu 12.04 installed by somebody else?
You've probably forgotten to include the required cryptdevice mapped name in the kernel command line parameter. I had: cryptdevice=/dev/sdaX However, the second colon-separated field is mandatory, eg: cryptdevice=/dev/sdaX:root If you're using an SSD, and have understood the implications, for increased performance you may want to use: cryptdevice=/dev/sdaX:root:allow-discards
At boot I see: :: running hook [encrypt]A password is required to access the volume: Command requires device and mapped name as arguments Command requires device and mapped name as arguments Command requires device and mapped name as argumentsThe final message repeats every second. There is no opportunity for me to enter a password. I am running Manjaro, based upon Arch. What am I doing wrong?
LUKS password not being requested by dmcrypt / encrypt hook at boot
If you're already using the new LUKS2 format, you can set a label: For new LUKS2 containers: # cryptsetup luksFormat --type=luks2 --label=foobar foobar.img # blkid /dev/loop0 /dev/loop0: UUID="fda16145-822e-405c-9fe8-fe7e7f0ddb5e" LABEL="foobar" TYPE="crypto_LUKS"For existing LUKS2 containers: # cryptsetup config --label=barfoo /dev/loop0 # blkid /dev/loop0 /dev/loop0: UUID="fda16145-822e-405c-9fe8-fe7e7f0ddb5e" LABEL="barfoo" TYPE="crypto_LUKS"However, it's not possible to set a label for the more common LUKS1 header.With LUKS1, you can only set a label on a higher layer. For example, if you are using GPT partitions, you can set a PARTLABEL. # parted /dev/loop0 (parted) name 1 foobar (parted) print Model: Loopback device (loopback) Disk /dev/loop0: 105MB Sector size (logical/physical): 512B/512B Partition Table: gpt Disk Flags: Number Start End Size File system Name Flags 1 1049kB 104MB 103MB foobarThis sets the partition label of partition 1 to "foobar". You can identify it with PARTLABEL=foobar or find it in /dev/disk/by-partlabel/ # ls -l /dev/disk/by-partlabel/foobar lrwxrwxrwx 1 root root 13 Oct 10 20:10 /dev/disk/by-partlabel/foobar -> ../../loop0p1Similarly, if you use LUKS on top of LVM, you could go with VG/LV names.As always with labels, take extra care to make sure each label doesn't exist more than once. There's a reason why UUIDs are meant to be "universally unique". You get a lot of problems when trying to use the wrong device; it can even cause data loss (e.g. if cryptswap formats the wrong device on boot).
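Once a label exists (LUKS2 label or GPT partition label), udev should expose it as a symlink that can be used in configuration. A sketch, assuming your distribution's standard persistent-storage udev rules populate these directories:

ls -l /dev/disk/by-label/        # LUKS2 labels (and filesystem labels)
ls -l /dev/disk/by-partlabel/    # GPT partition labels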
I'm running Arch Linux with systemd boot. In /boot/loader/entries/arch.conf I currently specify the luks crypto device with a line like this: options rw cryptdevice=/dev/sda1:ABC root=/dev/mapper/ABCI know I can also use UUID instead of /dev/sda1. In that case the kernel options line would look like this: options rw cryptdevice=UUID=1f5cce52-8299-9221-b2fc-19cebc959f51:ABC root=/dev/mapper/ABCHowever, can I instead use either a partition label or a volume label or any other kind of label? If so, what is the syntax?
How to specify cryptdevice by label using systemd boot?
From the ArchLinux encrypt hook (/lib/initcpio/hooks/encrypt): *) # Read raw data from the block device # ckarg1 is numeric: ckarg1=offset, ckarg2=length dd if="$resolved" of="$ckeyfile" bs=1 skip="$ckarg1" count="$ckarg2" >/dev/null 2>&1 ;;So while it supports reading a key from a raw block device, it uses a blocksize of 1 (instead of the default 512), so you have to multiply your values by 512 to make it work. So instead of cryptkey=/dev/sdd:1:6 try cryptkey=/dev/sdd:512:3072.
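Given dd parameters expressed in 512-byte blocks, the hook parameters can be derived mechanically; a small sketch:

# dd wrote with bs=512 seek=1 count=6; the encrypt hook reads with bs=1
seek=1 count=6
echo "cryptkey=/dev/sdd:$((seek * 512)):$((count * 512))"
# -> cryptkey=/dev/sdd:512:3072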
I'm using Arch Linux with an encrypted LUKS root partition (boot unencrypted), currently unlocked with a passphrase. Now I have a keyfile (3072 bytes) that's written to a USB stick this way: sudo dd if=tempKeyFile.bin of=/dev/sdd bs=512 seek=1 count=6 and also set as an additional pass: sudo cryptsetup luksAddKey /dev/sdb6 tempKeyFile.bin When I open the partition manually with: sudo cryptsetup --key-file tempKeyFile.bin open /dev/sdb6 luks_root everything works, the partition is mapped and can be mounted. Now my kernel parameter line in grub.cfg looks like this: linux /vmlinuz-linux root=UUID=$UUID_OF_luks_root$ rw cryptdevice=UUID=$UUID_OF_sdb6$:luks_root cryptkey=/dev/sdd:1:6 But when booting, I get this error: No key available with this passphrase. Invalid Keyfile. Reverting to passphrase. I already tried offset 2 instead of 1, but same result. I noticed it doesn't say that the keyfile could not be found/read, but that it was incorrect. There seems to be little documentation about this way of storing a LUKS keyfile. The Arch wiki mentions it, but very briefly, and I seem to conform to it, so I think it should be possible. In my mkinitcpio.conf, MODULES, BINARIES and FILES are empty and I set: HOOKS=(base udev autodetect keyboard modconf block encrypt filesystems fsck) so block is right before encrypt. What's the problem here?
Using space before the 1st partition of a USB stick as a LUKS key
If there is corruption in the LUKS header (more than just a single byte), it's pretty much impossible to recover. The LUKS header does not have a checksum for its key material, so if it's damaged in any way, cryptsetup luksDump will look the same as always, but your passphrase simply won't work anymore. If you're unable to make the passphrase work, it's not possible to rule out corruption.

You could check it out with hexdump (a manual approach to the keyslot checker):

hexdump -C -n 132096 foobar.img | less

00000000  4c 55 4b 53 ba be 00 01  61 65 73 00 00 00 00 00  |LUKS....aes.....|
00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000020  00 00 00 00 00 00 00 00  78 74 73 2d 70 6c 61 69  |........xts-plai|
... 0x00000->0x01000 should be mixed text, zero and random ...
00001000  9f 27 7a 46 8b c7 0e 09  00 82 2d 66 a7 4b b7 76  |.'zF......-f.K.v|
00001010  7a 01 ed 65 91 d0 96 af  3c f1 85 0d 64 48 81 e7  |z..e....<...dH..|
00001020  3a 00 0d d1 23 e0 95 d2  8e 42 34 4d e2 74 c4 d6  |:...#....B4M.t..|
... 0x01000->0x20400 should be 128K of random only ...
000203d0  b6 04 f6 34 08 64 10 3f  4e b7 c4 21 e6 d8 da 56  |...4.d.?N..!...V|
000203e0  0e eb 53 ce d2 a6 94 f0  92 7b 11 4b c1 96 9f 17  |..S......{.K....|
000203f0  94 88 b4 cd 36 a5 e1 b2  e9 ba 27 f3 85 7d cb 3f  |....6.....'..}.?|
00020400

The first segment is what luksDump shows; only parts of it are random. The range 00001000..00020400 is the key material for Key Slot 0, and it should look random throughout. If any segment of it is zeroed out or otherwise distinctly lacking in randomness (like a wild plain-text string appearing), the header is corrupt.

If you're not using the US layout, try that as well as whatever layout you usually use. Keyboard layout problems are also a common reason for passphrases to stop working. In this case it helps to add the same passphrase multiple times (once for each layout), so LUKS will accept it regardless of which layout is currently active.
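If you want to test candidate passphrases (or layouts) without touching the mapping at all, cryptsetup's --test-passphrase flag is handy. A minimal sketch, assuming the device from the question:

# try the passphrase against keyslot 0 only; exit status 0 means it unlocked
sudo cryptsetup open --test-passphrase --key-slot 0 /dev/sda7 && echo "passphrase OK"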
One day, when I turned on the computer, my passphrase for the home partition /dev/sda7 didn't work (I am 147% sure I was typing the right passphrase)! After three tries, I rebooted the computer via force shutdown and tried to enter the same passphrase. That didn't work either. Then, instead of the default boot entry "Boot arch", I chose "Boot arch with Linux linux" — and that helped. I worked all day and then turned off the computer. But at the next boot this trick didn't help anymore, and neither did "Boot arch with Linux linux (initramfs fallback)" (I have only 3 boot choices). Then I decided to boot from an Ubuntu live USB.

sudo cryptsetup luksOpen /dev/sda7 home

says:

No key available with this passphrase.

I tried to execute sudo cryptsetup --verbose repair /dev/sda7, which said No known problems detected for LUKS header.. I compiled and ran the official cryptsetup keyslot checker tool https://gitlab.com/cryptsetup/cryptsetup/tree/master/misc/keyslot_checker. It reported the same information about keyslots that luksDump shows:

$ sudo cryptsetup luksDump /dev/sda7
LUKS header information for /dev/sda7

Version:        1
Cipher name:    aes
Cipher mode:    xts-plain64
Hash spec:      sha256
Payload offset: 4096
MK bits:        256
MK digest:      fc 18 49 fe 3a 4e d4 11 b9 6f 0c c7 1d 54 0a 8d 44 01 86 36
MK salt:        5e 59 c8 fc f2 a9 10 b9 bf 7c 68 4b e4 a5 8e 00
                5a f9 c7 66 f9 5b 02 ff e7 59 e4 fd 43 f2 dc b5
MK iterations:  249500
UUID:           cc2f71c3-f0d9-4642-bf59-87bff4f60b54

Key Slot 0: ENABLED
    Iterations:          1996099
    Salt:                3e 60 e7 14 02 95 89 c0 c2 bf 8d 61 bb 99 13 aa
                         9d 9a c4 7d d4 41 78 ee 76 b0 48 b4 ed b0 ff a8
    Key material offset: 8
    AF stripes:          4000
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED

All of that looks like everything is OK — as if the header and the whole partition were undamaged. I have no idea why the passphrase doesn't fit. All I can say is that I fully upgrade my system (via sudo pacman -Syyu) every day, and perhaps one day an upgrade somehow caused these consequences.
LUKS passphrase doesn't work
LUKS by default uses 2 MiB for its header, mainly for data alignment reasons. You can check this with cryptsetup luksDump (Payload offset:, in 512-byte sectors). If you don't care about alignment, you can use the --align-payload=1 option.

As for ext4, it's complicated: its overhead depends on the filesystem size, inode size, journal size and so on. If you don't need a journal, you might prefer ext2. Other filesystems may have less overhead than ext*, so it might be worth experimenting. Some of the mkfs flags (like -T largefile or similar) might also help, depending on what kind of files you're putting on this thing — e.g. you don't need to create the filesystem with a million inodes if you're only going to put a dozen files in it.

If you want the container to be of minimal size, you could start out with a larger container and then use resize2fs -M to shrink the filesystem to its minimum size. You can then truncate the container to that size plus the Payload offset: of LUKS, as sketched below.

That should be pretty close to minimal. If you need it even smaller, consider using tar.xz instead of a filesystem. While tar isn't that great for hundreds of GB of data (you need to extract everything to access a single file), it should be okay for the sizes you mentioned, and it should be smaller than most filesystems...
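Here is a minimal sketch of that shrink-and-truncate approach. The file and mapper names (container.img, key.bin, container) are assumptions, and the arithmetic assumes the Payload offset is reported in 512-byte sectors as luksDump does:

cryptsetup luksOpen --key-file key.bin /dev/loop0 container
e2fsck -f /dev/mapper/container
resize2fs -M /dev/mapper/container   # shrink filesystem to its minimum size
# read back the resulting filesystem size and the LUKS payload offset
blocks=$(dumpe2fs -h /dev/mapper/container 2>/dev/null | awk '/^Block count:/ {print $3}')
bsize=$(dumpe2fs -h /dev/mapper/container 2>/dev/null | awk '/^Block size:/ {print $3}')
offset=$(cryptsetup luksDump /dev/loop0 | awk '/Payload offset:/ {print $3}')
cryptsetup luksClose container
losetup -d /dev/loop0
# cut the container down to filesystem size plus the LUKS header
truncate -s $(( blocks * bsize + offset * 512 )) container.img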
I want to encrypt the content of a directory in a container with an ext4 filesystem using cryptsetup. The size of the container should be as small as possible and as big as necessary, because I only want to write once and then back up.

First try: setting the size of the container to the size of the content.

dirsize=$(du -s -B512 "$dir" | cut -f 1)
dd if=/dev/zero of=$container count=$dirsize
losetup /dev/loop0 $container
fdisk /dev/loop0 # 1 partition with max possible size
cryptsetup luksFormat --key-file $keyFile /dev/loop0
cryptsetup luksOpen --key-file $keyFile /dev/loop0 container
mkfs.ext4 -j /dev/mapper/container
mkdir /mnt/container
mount /dev/mapper/container /mnt/container
rsync -r "$dir" /mnt/container

rsync reports that there is not enough space for the data. That seems reasonable, as there has to be some overhead for the encryption and the filesystem. I tried it with a relative offset:

dirsize=$(($dirsize + ($dirsize + 8)/9))

This fixes the problem for directories > 100 MB, but not for directories < 50 MB. How can I determine how many bytes bigger than the directory the container has to be?
How much storage overhead comes along with cryptsetup and ext4?
Linux system backup

When targeting a true full system backup, disk image backups (as asked) offer substantial advantages (detailed below) compared to file-based backups. With file-based backups the disk/partition structure is not saved; a full restore is usually a huge time consumer, since many time-consuming steps (like a system reinstall) are required; and backing up installed applications can be tricky. Disk image backups avoid all these cons, and the restore process is a one-shot step.

Tools like clonezilla and fsarchiver are not suitable for this question because they are missing one or more of the requested features.

As a reminder, LUKS-encrypted partitions do not depend on the file system used (ext3/ext4/etc.), but keep in mind that performance differs depending on the chosen file system (details). Also note that btrfs (video-1, video-2) may be a very good option because of its snapshot feature and data structure — but this is just an additional protection layer, because btrfs snapshots are not true backups! (Classic snapshots reside on the same partition.)

As a side note, in addition to disk image backups we may want to do a simple file-sync backup for some particular locations. To achieve this, tools like rsync/grsync (or btrfs-send in the case of btrfs) can be used in combination with cron (if required) and an encrypted backup destination (like a luks-partition/vault/truecrypt). File-based backup tools can be: rsync/grsync, rsnapshot, cronopete, dump/restore, timeshift, deja-dup, systemback, freefilesync, realtimesync, luckybackup, vembu.

Annotations

lsblk --fs output:

sda
├─sda1 crypto_LUKS f3df6579-UUID...
│ └─crypt_sda1 ext4 bc324232-UUID... /mount-location-1
└─sda2 crypto_LUKS c3423434-UUID...
  └─crypt_sda2 ext4 a6546765-UUID... /mount-location-2

Here sda is the main disk, sda1/sda2 are the encrypted partitions, and crypt_sda1/crypt_sda2 are the virtual (mapped) un-encrypted partitions.

Method #1

Back up the original LUKS disk/partition (sda or sda1), encrypted as it is, to any location.

bdsync/bdsync-manager is an amazing tool that can do image backups (full/incremental) by fast block-device syncing. It can be used with LUKS directly on the encrypted partition, and incremental backups work very well in this case too. The tool supports mounting/compression/network/etc.

dd: the classic method for disk imaging; it can be used with a command similar to:

dd if=/dev/sda1 of=/backup/location/crypted.img bs=128K status=progress

Note that imaging a mounted partition with dd may lead to data corruption for files in use while the backup runs (SQL databases, X config files, documents being edited). To guarantee data integrity with such a backup, closing all running applications and databases is recommended; we can also mount the image after its creation and check its integrity with fsck.

Cons for #1: backup size, compression and incremental backups can be tricky.

Method #2

This method is for disks without encryption, or for backing up the mapped, un-encrypted LUKS partitions crypt_sda1/crypt_sda2... An encrypted backup destination (like a luks-partition/vault/truecrypt), or an encrypted archive/image if the backup tool supports such a feature, is recommended.

Veeam: free/paid professional backup solution (on Linux, command line and TUI only); the kernel module is open source. This tool cannot be used for the first method; encrypted, incremental and mountable backups are supported.

bdsync/bdsync-manager: same as in the first method, but the backup is made from the un-encrypted mapped partitions (crypt_sda1/crypt_sda2).

dd: same as in the first method, with a command similar to:

dd if=/dev/mapper/crypt_sda1 of=/backup/location/un-encrypted-sda1.img bs=128K status=progress

The same caveat about imaging mounted partitions applies here too.

Cons for #2: disk headers, MBR, partition structure, UUIDs etc. are not saved; the additional backup steps below are required for a full backup (a restore sketch follows after the list).

Backup LUKS headers: cryptsetup luksHeaderBackup /dev/sda1 --header-backup-file /backup/location/sda1_luks_headers_backup
Backup MBR: dd if=/dev/sda of=/backup/location/backup-sda.mbr bs=512 count=1
Backup partition structure: sfdisk -d /dev/sda > /location/backup-sda.sfdisk
Backup disk UUIDs (see the lsblk --fs output above)

Note: images made with dd can be mounted with commands similar to:

fdisk -l -u /location/image.img
kpartx -l -v /location/image.img
kpartx -a -v /location/image.img
cryptsetup luksOpen /dev/mapper/loop0p1 imgroot
mount /dev/mapper/imgroot /mnt/backup/

Alternatives:

Bareos: open source backup solution (demo-video)
Bacula: open source backup solution (demo-video)
WereSync: disk image solution with an incremental feature.
Other tools can be found here, here, here or here. There is a Wikipedia page comparing disk cloning software, and an analysis by Gartner of some professional backup solutions is available here.

Other tools

Acronis backup may be used for both methods, but its kernel module is always updated very late (not working with current/recent kernel versions), and mounting backups was not working as of 02/2020.
Partclone: used by clonezilla; only backs up used disk blocks. It supports image mounting but does not support live/hot backups or encryption/LUKS.
Partimage: dd alternative with a TUI; it supports live/hot backups, but images cannot be mounted and it does not support LUKS (it does support ext4/btrfs).
Doclone: very nice live/hot backup imaging solution supporting many file systems (ext4 etc., but not LUKS...); supports network; mounting is not possible.
Rsnapshot: snapshot file backup system using rsync, used in many distros (like Mageia); backup jobs are scheduled with cron; when running in the background, the backup status is not automatically visible.
Rsync/Grsync: sync folders with the rsync command; grsync is the GUI...
Cronopete: file backup alternative to rsync (the application is limited in how it works compared to modern solutions).
Simple-backup: file backup solution with a tray icon and incremental feature; backups are made to tar archives.
Backintime: Python backup app for file-based backups (the app has many unsolved issues).
ShadowProtect: Acronis alternative with a mount feature... LUKS support is not obvious.
Datto: professional backup solution; LUKS support is not obvious; the Linux agent needs to be networked to a backup server... the kernel module is open source on GitHub... the interface is web-based without a modern design.
FSArchiver: live/hot image backup solution; backups cannot be mounted.
Dump: image backup system; mounting is not supported.
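To make the restore side of those extra steps concrete, here is a hedged sketch using the same file names as above. Verify everything against your own layout before running — these commands overwrite disk metadata:

# restore the partition structure saved with sfdisk -d
sfdisk /dev/sda < /location/backup-sda.sfdisk
# restore the LUKS header saved earlier
cryptsetup luksHeaderRestore /dev/sda1 --header-backup-file /backup/location/sda1_luks_headers_backup
# restore the MBR saved with dd
dd if=/backup/location/backup-sda.mbr of=/dev/sda bs=512 count=1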
Disk/Partition Backup

What are the backup options and good practices for making a solid and easy-to-use full system backup? With the following requirements:

Live backup
Image backup
Encrypted backup
Incremental backups
Mount/access the backup disk/files easily
Full system backup, restorable in one shot
Can be scheduled automatically (with cron or else)
Encrypted or classic backup source (luks, dm-crypt, ext3/ext4/btrfs)
Serious backup options for Linux disks (dm-crypt, LUKS, ext4, ext3, btrfs), normal and encrypted systems
Evidently, I hadn't created an /etc/crypttab file. Create one, then run update-initramfs -u to fix the issue.
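For reference, a minimal sketch of what such a crypttab entry could look like for the setup in the question. The target name and UUID placeholder are assumptions — replace them with your own (blkid /dev/sda5 shows the UUID):

# /etc/crypttab: <target name> <source device> <key file> <options>
macbookcrypt UUID=<uuid-of-sda5> none luks

# then rebuild the initramfs so cryptsetup gets included
sudo update-initramfs -u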
I've created an encrypted root partition using LUKS which contains a few LVM partitions. I can't boot, and get the following output on startup:

Begin mounting root file system ...
Begin: Running /scripts/local-top ...
/scripts/local-top/cryptroot: line 1: /sbin/cryptsetup: not found

It still prompts me for the password:

Unlocking the disk /dev/sda5 (macbookcrypt)
Enter passphrase: *******************************
cryptsetup: cryptsetup failed, bad password or options?
/scripts/local-top/cryptroot: line 1: /sbin/cryptsetup: not found

Yet it fails every time. My boot command line is:

vmlinuz-3.13.0-37-generic ro root=/dev/mapper/macbooklvm-root cryptopts=target=macbookcrypt,source=/dev/sda5,lvm=macbooklvm recovery initrd=\initrd.img-3.13.0-37-generic

I've added dm_crypt to /etc/modules and then ran update-initramfs to regenerate with dm_crypt included. I'm on Ubuntu 14.04, by the way. In the initramfs shell, I can't seem to locate cryptsetup anywhere:

(initramfs) cat /proc/modules | grep crypt
dm_crypt 23177 0 - Live 0xffffffffa0006000
(initramfs) find / -iname "cryptsetup"
(initramfs)

It appears that the dm_crypt module is loaded, so that's good, but cryptsetup isn't present here. How do I install it into my Linux boot? Does it need to be included in initrd, vmlinuz, or system files somehow? I'm new to this hackery.
/sbin/cryptsetup not found on boot
I found out that you can make a custom initramfs hook with mkinitcpio that prints out such information. Make sure you follow this correctly, otherwise your kernel will panic.

To do so, you create files under:

/usr/lib/initcpio/install/MODULENAME
/usr/lib/initcpio/hooks/MODULENAME

/usr/lib/initcpio/install/MODULENAME

This is a bash script that helps build the module when you regenerate the initramfs with mkinitcpio. It must have build() and help() functions. The build function calls the add_runscript command, which adds our runtime bash file of the same name under /usr/lib/initcpio/hooks/MODULENAME:

build() {
    add_runscript
}

/usr/lib/initcpio/hooks/MODULENAME

This is a bash script that is run when the initramfs is loaded. Any commands you would like to run must be in a function called run_hook():

run_hook() {
    # note this environment is limited, as our drive is encrypted;
    # only core system commands will be available
    # (it is possible to add more commands to the initramfs environment)
    echo "hello!!"
}

Add the hook to mkinitcpio.conf

Now we add the hook to the HOOKS array in our mkinitcpio configuration file, /etc/mkinitcpio.conf. We put the custom hook before the encrypt hook, so its output shows before the password prompt:

HOOKS=(base udev autodetect modconf kms keyboard MODULENAME encrypt lvm2 keymap consolefont block filesystems fsck)

Regenerate the initramfs

Finally we regenerate our initramfs so that this hook is loaded on the next boot:

$ sudo mkinitcpio -p linux

Check the output for any errors before rebooting — and pray for no kernel panic!
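To show owner/return information as asked, the run_hook body could look something like this — the name and contact details are of course hypothetical placeholders:

run_hook() {
    echo "============================================"
    echo " This machine belongs to: Jane Doe"          # hypothetical owner
    echo " If found, please contact: jane@example.com" # hypothetical contact
    echo "============================================"
}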
I have full disk encryption on my Arch Linux laptop. When I power on the machine, it prompts me for my disk password. My system is encrypted following the "LVM on LUKS" ArchWiki page. The prompt says something like "a password is required for the cryptlvm volume". I would like to change this to feature some information about the system, like the owner and an address to return it to if lost. So far I have just looked at the Arch wiki and searched to see if anyone else has asked anything similar, but I cannot seem to find anything.
custom prompt for system encryption password entry on startup
The short answer is that encrypted volumes are not really more at risk.

Encrypted volumes have a single point of failure: the information at the beginning of the volume that maps the password (or possibly several passwords, for systems like LUKS) to the encryption key for the data. (That is why it is a good idea to encrypt a partition and not a whole disc, so that accidental partitioning doesn't overwrite this data.) This data does not, AFAIK, have to be updated unless one of the passwords changes, so the chances of it getting corrupted by a shutdown while it is half-written are low.

The rest of the disc is normally accessible with the encryption key retrieved from the above information block. Normal filesystem recovery is possible given that key.

You can make a backup of the LUKS header. But you have to realise that if you do and then change one of the passwords later on (e.g. because it was compromised), the backup still "uses" the old passwords.
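For completeness, a minimal sketch of backing up and restoring that header with cryptsetup — the device path here is an assumption, adjust it to your own:

# save the LUKS header (including all keyslots) to a file
sudo cryptsetup luksHeaderBackup /dev/sdb1 --header-backup-file luks-header-sdb1.img
# and, should the on-disk header ever get damaged, write it back
sudo cryptsetup luksHeaderRestore /dev/sdb1 --header-backup-file luks-header-sdb1.img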
I have some encrypted volumes that I use with my Xubuntu machine. One volume is a container file that is mapped to /dev/loop0 and encrypted using plain dm-crypt; another volume is a USB hard drive encrypted using dm-crypt/LUKS. What I'd like to know is what would happen if I accidentally shut down the computer without unmounting and unmapping these volumes? Is it any more risky than if the volumes weren't encrypted? Similarly, what would happen if I had to hard-reboot the machine without unmapping the volumes, because the system froze for example?
Are encrypted volumes more vulnerable to power loss?
I'm not 100% sure I understand what you mean by mapping, but yes, this seems normal. You need to add the device to /etc/crypttab (and, for mounting, /etc/fstab) like you would to mount any other drive.

https://wiki.archlinux.org/index.php/Dm-crypt/System_configuration#crypttab

The page above should have the information you're looking for.
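For a plain dm-crypt mapping like the one in the question, a crypttab entry could look roughly like this. This is a hedged sketch: the keyfile path is an assumption, and plain mode needs the cipher/size options spelled out, since there is no header to read them from:

# /etc/crypttab: <name> <device> <keyfile> <options>
TestEncrypted /dev/sdb /etc/keys/testencrypted.key plain,cipher=serpent-cbc-essiv:sha256,size=256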
I notice that if a device mapping is created with the low-level dmsetup or through ioctls, the device mapping will no longer be there after a reboot.

Is this normal? I am using a USB stick to test out dm_crypt. If it is normal, how do I go about making the mapping stay around? Do I need to look into udev?

Thanks!

Edit for clarification

What I mean by device mapping is the table entry that specifies how to map each range of physical block sectors to a virtual block device. You can see what I mean, if using LVM, with the dmsetup table command, which dumps all current device table mappings. Here's an example for the device-mapper linear target, tying two disks together into an LVM swap (physical block abstraction):

vg00-lv_swap: 0 1028160 linear /dev/sdb 0
vg00-lv_swap: 1028160 3903762 linear /dev/sdc 0

The format here is:

<mapping_name>: <start_block> <segment_length> <mapping_target> <block_device> <offset>

Where:

mapping_name: the name of the virtual device
start_block: starting block for the virtual device
segment_length: length in sectors (512-byte chunks)
mapping_target: device mapping target, such as linear, crypt, or striped
block_device: which physical block device to use (by path or major:minor)
offset: offset on the physical block device

My problem is that, after creating a new entry in the device mapping table, it disappears after a reboot. That is, running something like:

dmsetup create TestEncrypted --table "0 $(blockdev --getsz /dev/sdb) crypt serpent-cbc-essiv:sha256 a7f67ad...ee 0 /dev/sdb 0"

and then rebooting causes the mapping table entry to disappear (i.e. it doesn't show up with dmsetup table), as does the corresponding /dev/mapper/TestEncrypted.
How to make device mappings stay after reboot?