This change adds a struct fdisk_column to provide a generic description
of partition information. The struct is used for tt tables as well as
for the lists of possible columns of a specified label driver.
We use the same concept in all applications linked with tt.c (lsblk,
findmnt, partx, ...) where it is possible to dynamically change the
columns, their order, etc. Now it will be possible to do the same with
fdisk.
And it's also possible to use the FDISK_COL_* IDs to address data, for
example:
fdisk_partition_get_data(cxt, FDISK_COL_SIZE, 1, &data);
returns a string with the human-readable size (<num>{MGT}) of the
second partition.
Signed-off-by: Karel Zak <kzak@redhat.com>
[kzak@redhat.com: - move paths to pathnames.h,
- use static path buffer]
Signed-off-by: Ondrej Oprala <ooprala@redhat.com>
Signed-off-by: Karel Zak <kzak@redhat.com>
In sys-utils/hwclock.c, set_hardware_clock_exact() has some problems when the
process gets pre-empted (for more than 100ms) before reaching the time for
which it waits:
1. The "continue" statement causes execution to skip the final tdiff
assignment at the end of the do...while loop, leading to the while condition
using the wrong value of tdiff, and thus always exiting the loop once
newhwtime != sethwtime (e.g., after 1 second). This masks bug #2, below.
2. The previously-existing bug is that because it starts over waiting for the
desired time whenever two successive calls to gettimeofday() return values >
100ms apart, the loop will never terminate unless the process holds the CPU
(without losing it for more than 100ms) for at least 500ms. This can happen
on a heavily loaded machine or on a virtual machine (or on a heavily loaded
virtual machine). This has been observed to occur, preventing a machine from
completing the shutdown or reboot process due to a "hwclock --systohc" call in
a shutdown script.
The new implementation presented in this patch takes a somewhat different
approach, intended to accomplish the same goals:
It computes the desired target system time (at which the requested hardware
clock time will be applied to the hardware clock), and waits for that time to
arrive. If it misses the time (such as due to being pre-empted for too long),
it recalculates the target time, and increases the tolerance (how late it can
be relative to the target time and still be "close enough"). Thus, if all is
well, the time will be set *very* precisely. On a machine where the hwclock
process is repeatedly pre-empted, it will set the time as precisely as is
possible under the conditions present on that particular machine. In any
case, it will always terminate eventually (and pretty quickly); it will never
hang forever.
[kzak@redhat.com: - tiny coding style changes]
Signed-off-by: Chris MacGregor <chrismacgregor@google.com>
Signed-off-by: Karel Zak <kzak@redhat.com>
* libmount/src/utils.c (BTRFS_TEST_MAGIC): Conditionally add define
which is used since commit v2.24-243-g6a52473.
Signed-off-by: Bernhard Voelker <mail@bernhard-voelker.de>
The code currently always returns EXIT_SUCCESS, which is strange. It
seems better to return 0 on success, 1 on complete failure, and 64 on
partial success.
Signed-off-by: Karel Zak <kzak@redhat.com>
The _DEPENDENCIES has to be used for dependencies on other in-tree
files, while _LIBADD specifies additional libraries (including
external libraries).
Reported-by: oleid <notifications@github.com>
Signed-off-by: Karel Zak <kzak@redhat.com>
The previous-file command is not :P but :p, and the back-to-where
command is not an acute accent but an apostrophe. Also condense
some of the descriptions and remove some useless comments.
Signed-off-by: Benno Schulenberg <bensberg@justemail.net>
This feature is hopefully mostly used to give MESSAGE_ID labels to
messages coming from scripts, making searching for messages easy. The
logger(1) manual page update should give enough information on how to
use the --journald option.
[kzak@redhat.com: - add missing #ifdefs
- use xalloc.h]
Signed-off-by: Sami Kerola <kerolasa@iki.fi>
Signed-off-by: Karel Zak <kzak@redhat.com>
As the comment in the code says, this method is really only valid
on x86 and x86_64, so add a #ifdef for those architectures around
that code block.
This was causing "Program lscpu tried to access /dev/mem between f0000->100000."
warnings on some ppc64 machines.
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
Not all file systems support the d_type field and simply checking for
d_type == DT_DIR in is_node_dirent would cause the test suite to fail
if run on (for example) XFS.
The simple fix is to check for DT_DIR or DT_UNKNOWN in is_node_dirent.
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
umount(8) always parses /proc/self/mountinfo to get the fstype and to
merge kernel mount options with userspace mount options from
/run/mount/utab. This behavior is overkill in many cases and it's
pretty expensive, as the kernel always has to compose the *whole*
mountinfo. This performance disadvantage is visible in extreme
use-cases with a huge number of mountpoints and frequently-called
umount(8).
It seems that we can bypass /proc/self/mountinfo by using statfs() to
get the filesystem type (the statfs.f_type magic) and analyzing
/run/mount/utab before we parse mountinfo.
This optimization is not used when:
* umount(8) is executed by non-root (as a user= entry in utab is
  expected)
* umount --lazy / --force is used (the target is probably an
  unreachable NFS mount, so using statfs() is a pretty bad idea)
* the target is not a directory (e.g. umount /dev/sda1)
* there is a (deprecated) writable mtab
Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Karel Zak <kzak@redhat.com>
.travis.yml is used for automatic builds on the Travis build farm
(https://travis-ci.org/) if the travis service hook is enabled for the
repo on github.
This initial YAML controller will run 2 different compilers (gcc,
clang). The test suite currently fails; that's why we don't abort yet.
Commit b9579f1f44 moved fclose() to
checkf(), but missed removing the file close in magic(). Ironically,
the cause of the regression is in the previous commit message.
Signed-off-by: Sami Kerola <kerolasa@iki.fi>
* don't initialize timingfd (to stderr) when -t is not specified
* handle timingfd in dooutput() rather than in main()
* make timingfd global like the fscript FILE
* close everything in done()
* close irrelevant things in the subshell and input processes
Reported-by: Sami Kerola <kerolasa@iki.fi>
Signed-off-by: Karel Zak <kzak@redhat.com>
If both -f and -t are given, flush the timing fd on each write, similar
to the behavior on the script fd. This allows playback of still-running
sessions, and reduces the risk of ending up with empty timing files when
script(1) exits abnormally.
Signed-off-by: Jesper Dahl Nyerup <nyerup@one.com>
Theodore Ts'o:
I'll add that I've never been convinced that the mkfs front end is all
that useful. It's probably better for people to explicitly run
/sbin/mkfs.xfs, /sbin/mkfs.ext4, etc., so you don't have to worry
about which options get passed down to the file system specific mkfs
program, and which ones are interpreted by /sbin/mkfs --- and I don't
believe /sbin/mkfs adds enough (err, any?) value that using
"/sbin/mkfs -t xxx" vs "/sbin/mkfs.xxx" makes any sense whatsoever.
... and I absolutely agree.
Reported-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Karel Zak <kzak@redhat.com>
Also, slice up the usage text for ease of translation.
Reported-by: Phillip Susi <psusi@ubuntu.com>
Signed-off-by: Benno Schulenberg <bensberg@justemail.net>
It seems that Linux 3.14 is able to produce things like:
19 0 8:3 / / rw,relatime - ext4 /dev/sda3 rw,data=ordered
^
Reported-by: Mantas Mikulėnas <grawity@gmail.com>
Signed-off-by: Karel Zak <kzak@redhat.com>
Based on Pádraig Brady review:
* use is_nul() from coreutils rather than memcmp()
* always call skip_hole() (SEEK_DATA)
* fix possible overflows
Signed-off-by: Karel Zak <kzak@redhat.com>
It's more efficient to skip already-known holes with SEEK_DATA (seek
to the next area with data).
Thanks to Pádraig Brady.
Signed-off-by: Karel Zak <kzak@redhat.com>