Here's the "big" staging/iio pull request for 4.10-rc1. Not as big as 4.9 was, but still just over a thousand changes. We almost broke even of lines added vs. removed, as the slicoss driver was removed (got a "clean" driver for the same hardware through the netdev tree), and some iio drivers were also dropped, but I think we ended up adding a few thousand lines to the source tree in the end. Other than that it's a lot of minor fixes all over the place, nothing major stands out at all. All of these have been in linux-next for a while. There will be a merge conflict with Al's vfs tree in the lustre code, but the resolution for that should be pretty simple, that too has been in linux-next. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> -----BEGIN PGP SIGNATURE----- iG0EABECAC0WIQT0tgzFv3jCIUoxPcsxR9QN2y37KQUCWFAk1Q8cZ3JlZ0Brcm9h aC5jb20ACgkQMUfUDdst+ymikACg05b0h/iVTTH18474PXXnzw6jk9IAn0gI2fx9 cqp2MglTvphhrXzddL7V =MeTw -----END PGP SIGNATURE----- Merge tag 'staging-4.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging Pull staging/IIO updates from Greg KH: "Here's the "big" staging/iio pull request for 4.10-rc1. Not as big as 4.9 was, but still just over a thousand changes. We almost broke even of lines added vs. removed, as the slicoss driver was removed (got a "clean" driver for the same hardware through the netdev tree), and some iio drivers were also dropped, but I think we ended up adding a few thousand lines to the source tree in the end. Other than that it's a lot of minor fixes all over the place, nothing major stands out at all. All of these have been in linux-next for a while. There will be a merge conflict with Al's vfs tree in the lustre code, but the resolution for that should be pretty simple, that too has been in linux-next" * tag 'staging-4.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging: (1002 commits) staging: comedi: comedidev.h: Document usage of 'detach' handler staging: fsl-mc: remove unnecessary info prints from bus driver staging: fsl-mc: add sysfs ABI doc staging/lustre/o2iblnd: Fix misspelled attemps->attempts staging/lustre/o2iblnd: Fix misspelling intialized->intialized staging/lustre: Convert all bare unsigned to unsigned int staging/lustre/socklnd: Fix whitespace problem staging/lustre/o2iblnd: Add missing space staging/lustre/lnetselftest: Fix potential integer overflow staging: greybus: audio_module: remove redundant OOM message staging: dgnc: Fix lines longer than 80 characters staging: dgnc: fix blank line after '{' warnings. staging/android: remove Sync Framework tasks from TODO staging/lustre/osc: Revert erroneous list_for_each_entry_safe use staging: slicoss: remove the staging driver staging: lustre: libcfs: remove lnet upcall code staging: lustre: remove set but unused variables staging: lustre: osc: set lock data for readahead lock staging: lustre: import: don't reconnect during connect interpret staging: lustre: clio: remove mtime check in vvp_io_fault_start() ...
Lustre Parallel Filesystem Client
=================================

The Lustre file system is an open-source, parallel file system that
supports many requirements of leadership-class HPC simulation
environments.  Born from a research project at Carnegie Mellon
University, the Lustre file system is a widely-used option in HPC.
The Lustre file system provides a POSIX-compliant file system
interface and can scale to thousands of clients, petabytes of storage
and hundreds of gigabytes per second of I/O bandwidth.

Unlike shared-disk storage cluster filesystems (e.g. OCFS2, GFS,
GPFS), Lustre has independent Metadata and Data servers that clients
can access in parallel to maximize performance.

In order to use the Lustre client you will need to download the
"lustre-client" package that contains the userspace tools from
http://lustre.org/download/

You will need to install and configure your Lustre servers separately.

Mount Syntax
============

After you have installed the lustre-client tools, including the
mount.lustre binary, you can mount your Lustre filesystem with:

  mount -t lustre mgs:/fsname mnt

where "mgs" is the host name or IP address of your Lustre MGS
(management service) and "fsname" is the name of the filesystem you
would like to mount.  (A mount command that also passes mount options
is sketched at the end of this document.)

Mount Options
=============

  noflock
	Disable POSIX file locking (applications trying to use the
	functionality will get ENOSYS).

  localflock
	Enable local flock support, using only client-local flock
	(faster, for applications that require flock but do not run
	on multiple nodes).

  flock
	Enable cluster-global POSIX file locking coherent across all
	client nodes.

  user_xattr, nouser_xattr
	Support "user." extended attributes (or not).

  user_fid2path, nouser_fid2path
	Enable FID-to-path translation by regular users (or not).

  checksum, nochecksum
	Verify data consistency on the wire and in memory as it passes
	between the layers (or not).

  lruresize, nolruresize
	Allow the lock LRU to be controlled by memory pressure on the
	server (or limit it to 100 locks per CPU per server on this
	client; 100 is the default, controlled by the lru_size proc
	parameter).

  lazystatfs, nolazystatfs
	Do not block in statfs() if some of the servers are down.

  32bitapi
	Shrink inode numbers to fit into 32 bits.  This is necessary
	if you plan to re-export the Lustre filesystem from this
	client via NFSv4.

  verbose, noverbose
	Enable mount/umount console messages (or not).

More Information
================

You can get more information at the Lustre website:
http://wiki.lustre.org/

Source for the userspace tools and out-of-tree client and server code
is available at:
http://git.hpdd.intel.com/fs/lustre-release.git

Latest binary packages:
http://lustre.org/download/
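
Mount Example
=============

A minimal sketch combining the mount syntax above with a few of the
listed mount options.  The MGS host name "mgs", the filesystem name
"fsname" and the mount point "/mnt/lustre" are placeholders, not
values prescribed elsewhere in this document:

  # Create a mount point and mount the filesystem, enabling
  # cluster-global flock and "user." extended attributes.
  mkdir -p /mnt/lustre
  mount -t lustre -o flock,user_xattr mgs:/fsname /mnt/lustre

  # Unmount when finished.
  umount /mnt/lustre

Multiple options are passed as a single comma-separated -o argument,
as with any other mount(8) filesystem type.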