commit 39299d9d15
Merge with /shiny/git/linux-2.6/.git
@ -41,6 +41,7 @@ COPYING
CREDITS
CVS
ChangeSet
Image
Kerntypes
MODS.txt
Module.symvers
@ -103,6 +104,7 @@ logo_*.c
logo_*_clut224.c
logo_*_mono.c
lxdialog
mach-types.h
make_times_h
map
maui_boot.h
@ -103,11 +103,11 @@ Who: Jody McIntyre <scjody@steamballoon.com>
---------------------------

What:	register_serial/unregister_serial
When:	December 2005
When:	September 2005
Why:	This interface does not allow serial ports to be registered against
	a struct device, and as such does not allow correct power management
	of such ports.  8250-based ports should use serial8250_register_port
	and serial8250_unregister_port instead.
	and serial8250_unregister_port, or platform devices instead.
Who:	Russell King <rmk@arm.linux.org.uk>

---------------------------
@ -1,18 +1,22 @@
inotify
a powerful yet simple file change notification system
inotify
a powerful yet simple file change notification system

Document started 15 Mar 2005 by Robert Love <rml@novell.com>

(i) User Interface

Inotify is controlled by a set of three sys calls
Inotify is controlled by a set of three system calls and normal file I/O on a
returned file descriptor.

First step in using inotify is to initialise an inotify instance
First step in using inotify is to initialise an inotify instance:

	int fd = inotify_init ();

Each instance is associated with a unique, ordered queue.

Change events are managed by "watches".  A watch is an (object,mask) pair where
the object is a file or directory and the mask is a bit mask of one or more
inotify events that the application wishes to receive.  See <linux/inotify.h>
@ -22,43 +26,52 @@ Watches are added via a path to the file.

Watches on a directory will return events on any files inside of the directory.

Adding a watch is simple,
Adding a watch is simple:

	int wd = inotify_add_watch (fd, path, mask);

You can add a large number of files via something like

	for each file to watch {
		int wd = inotify_add_watch (fd, file, mask);
	}
Where "fd" is the return value from inotify_init(), path is the path to the
object to watch, and mask is the watch mask (see <linux/inotify.h>).

You can update an existing watch in the same manner, by passing in a new mask.

An existing watch is removed via the INOTIFY_IGNORE ioctl, for example
An existing watch is removed via

	inotify_rm_watch (fd, wd);
	int ret = inotify_rm_watch (fd, wd);

Events are provided in the form of an inotify_event structure that is read(2)
from a inotify instance fd.  The filename is of dynamic length and follows the
struct. It is of size len.  The filename is padded with null bytes to ensure
proper alignment.  This padding is reflected in len.
from a given inotify instance.  The filename is of dynamic length and follows
the struct.  It is of size len.  The filename is padded with null bytes to
ensure proper alignment.  This padding is reflected in len.

You can slurp multiple events by passing a large buffer, for example

	size_t len = read (fd, buf, BUF_LEN);

Will return as many events as are available and fit in BUF_LEN.
Where "buf" is a pointer to an array of "inotify_event" structures at least
BUF_LEN bytes in size.  The above example will return as many events as are
available and fit in BUF_LEN.

each inotify instance fd is also select()- and poll()-able.
Each inotify instance fd is also select()- and poll()-able.

You can find the size of the current event queue via the FIONREAD ioctl.
You can find the size of the current event queue via the standard FIONREAD
ioctl on the fd returned by inotify_init().

All watches are destroyed and cleaned up on close.
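
For illustration only (not part of this patch): a minimal userspace sketch of
the interface described above.  It assumes the glibc wrappers later shipped in
<sys/inotify.h>; when this document was written the three calls had to be made
via syscall(2), and BUF_LEN is just an arbitrary buffer size picked for the
example.

	/* Hypothetical example -- watch one directory for creates/deletes. */
	#include <stdio.h>
	#include <unistd.h>
	#include <sys/inotify.h>

	#define BUF_LEN (16 * (sizeof (struct inotify_event) + 256))

	int main (int argc, char *argv[])
	{
		char buf[BUF_LEN];
		ssize_t len;
		size_t i = 0;

		if (argc < 2)
			return 1;

		int fd = inotify_init ();	/* one instance, one event queue */
		if (fd < 0)
			return 1;

		int wd = inotify_add_watch (fd, argv[1], IN_CREATE | IN_DELETE);
		if (wd < 0)
			return 1;

		/* A single read() may return several packed inotify_event structs. */
		len = read (fd, buf, BUF_LEN);
		while (len > 0 && i + sizeof (struct inotify_event) <= (size_t) len) {
			struct inotify_event *ev = (struct inotify_event *) &buf[i];

			printf ("wd=%d mask=0x%x name=%s\n", ev->wd,
				(unsigned) ev->mask, ev->len ? ev->name : "");
			i += sizeof (struct inotify_event) + ev->len;
		}

		inotify_rm_watch (fd, wd);	/* watches also go away on close() */
		close (fd);
		return 0;
	}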

(ii) Internal Kernel Implementation
(ii)

Each open inotify instance is associated with an inotify_device structure.
Prototypes:

	int inotify_init (void);
	int inotify_add_watch (int fd, const char *path, __u32 mask);
	int inotify_rm_watch (int fd, __u32 mask);

(iii) Internal Kernel Implementation

Each inotify instance is associated with an inotify_device structure.

Each watch is associated with an inotify_watch structure.  Watches are chained
off of each associated device and each associated inode.
@ -66,7 +79,7 @@ off of each associated device and each associated inode.
See fs/inotify.c for the locking and lifetime rules.

(iii) Rationale
(iv) Rationale

Q: What is the design decision behind not tying the watch to the open fd of
   the watched object?
@ -75,9 +88,9 @@ A: Watches are associated with an open inotify device, not an open file.
   This solves the primary problem with dnotify: keeping the file open pins
   the file and thus, worse, pins the mount.  Dnotify is therefore infeasible
   for use on a desktop system with removable media as the media cannot be
   unmounted.
   unmounted.  Watching a file should not require that it be open.

Q: What is the design decision behind using an-fd-per-device as opposed to
Q: What is the design decision behind using an-fd-per-instance as opposed to
   an fd-per-watch?

A: An fd-per-watch quickly consumes more file descriptors than are allowed,
@ -86,8 +99,8 @@ A: An fd-per-watch quickly consumes more file descriptors than are allowed,
   can use epoll, but requiring both is a silly and extraneous requirement.
   A watch consumes less memory than an open file, separating the number
   spaces is thus sensible.  The current design is what user-space developers
   want: Users initialize inotify, once, and add n watches, requiring but one fd
   and no twiddling with fd limits.  Initializing an inotify instance two
   want: Users initialize inotify, once, and add n watches, requiring but one
   fd and no twiddling with fd limits.  Initializing an inotify instance two
   thousand times is silly.  If we can implement user-space's preferences
   cleanly--and we can, the idr layer makes stuff like this trivial--then we
   should.
@ -111,9 +124,6 @@ A: An fd-per-watch quickly consumes more file descriptors than are allowed,
   example, love it.  Trust me, I asked.  It is not a surprise: Who'd want
   to manage and block on 1000 fd's via select?

   - You'd have to manage the fd's, as an example: Call close() when you
     received a delete event.

   - No way to get out of band data.

   - 1024 is still too low.  ;-)
@ -122,6 +132,11 @@ A: An fd-per-watch quickly consumes more file descriptors than are allowed,
   scales to 1000s of directories, juggling 1000s of fd's just does not seem
   the right interface.  It is too heavy.

   Additionally, it _is_ possible to more than one instance and
   juggle more than one queue and thus more than one associated fd.  There
   need not be a one-fd-per-process mapping; it is one-fd-per-queue and a
   process can easily want more than one queue.

Q: Why the system call approach?

A: The poor user-space interface is the second biggest problem with dnotify.
@ -131,8 +146,6 @@ A: The poor user-space interface is the second biggest problem with dnotify.
   Obtaining the fd and managing the watches could have been done either via a
   device file or a family of new system calls.  We decided to implement a
   family of system calls because that is the preffered approach for new kernel
   features and it means our user interface requirements.

   Additionally, it _is_ possible to more than one instance and
   juggle more than one queue and thus more than one associated fd.
   interfaces.  The only real difference was whether we wanted to use open(2)
   and ioctl(2) or a couple of new system calls.  System calls beat ioctls.
@ -21,7 +21,7 @@ Overview
========

Linux-NTFS comes with a number of user-space programs known as ntfsprogs.
These include mkntfs, a full-featured ntfs file system format utility,
These include mkntfs, a full-featured ntfs filesystem format utility,
ntfsundelete used for recovering files that were unintentionally deleted
from an NTFS volume and ntfsresize which is used to resize an NTFS partition.
See the web site for more information.
@ -149,7 +149,14 @@ case_sensitive=<BOOL>	If case_sensitive is specified, treat all file names as
			name, if it exists.  If case_sensitive, you will need
			to provide the correct case of the short file name.

errors=opt		What to do when critical file system errors are found.
disable_sparse=<BOOL>	If disable_sparse is specified, creation of sparse
			regions, i.e. holes, inside files is disabled for the
			volume (for the duration of this mount only).  By
			default, creation of sparse regions is enabled, which
			is consistent with the behaviour of traditional Unix
			filesystems.

errors=opt		What to do when critical filesystem errors are found.
			Following values can be used for "opt":
			  continue: DEFAULT, try to clean-up as much as
				    possible, e.g. marking a corrupt inode as
@ -432,6 +439,24 @@ ChangeLog

Note, a technical ChangeLog aimed at kernel hackers is in fs/ntfs/ChangeLog.

2.1.23:
	- Stamp the user space journal, aka transaction log, aka $UsnJrnl, if
	  it is present and active thus telling Windows and applications using
	  the transaction log that changes can have happened on the volume
	  which are not recorded in $UsnJrnl.
	- Detect the case when Windows has been hibernated (suspended to disk)
	  and if this is the case do not allow (re)mounting read-write to
	  prevent data corruption when you boot back into the suspended
	  Windows session.
	- Implement extension of resident files using the normal file write
	  code paths, i.e. most very small files can be extended to be a little
	  bit bigger but not by much.
	- Add new mount option "disable_sparse".  (See list of mount options
	  above for details.)
	- Improve handling of ntfs volumes with errors and strange boot sectors
	  in particular.
	- Fix various bugs including a nasty deadlock that appeared in recent
	  kernels (around 2.6.11-2.6.12 timeframe).
2.1.22:
	- Improve handling of ntfs volumes with errors.
	- Fix various bugs and race conditions.
MAINTAINERS
@ -1172,6 +1172,12 @@ L: linux-input@atrey.karlin.mff.cuni.cz
L:	linux-joystick@atrey.karlin.mff.cuni.cz
S:	Maintained

INOTIFY
P:	John McCutchan and Robert Love
M:	ttb@tentacle.dhs.org and rml@novell.com
L:	linux-kernel@vger.kernel.org
S:	Maintained

INTEL 810/815 FRAMEBUFFER DRIVER
P:	Antonino Daplas
M:	adaplas@pol.net
@ -2423,6 +2429,12 @@ L: linux-usb-users@lists.sourceforge.net
L:	linux-usb-devel@lists.sourceforge.net
S:	Maintained

USB OPTION-CARD DRIVER
P:	Matthias Urlichs
M:	smurf@smurf.noris.de
L:	linux-usb-devel@lists.sourceforge.net
S:	Maintained

USB OV511 DRIVER
P:	Mark McClelland
M:	mmcclell@bigfoot.com
@ -1,22 +1,21 @@
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.12-rc1-bk2
# Sun Mar 27 23:59:14 2005
# Linux kernel version: 2.6.12-git3
# Sat Jul 16 15:21:47 2005
#
CONFIG_ARM=y
CONFIG_MMU=y
CONFIG_UID16=y
CONFIG_RWSEM_GENERIC_SPINLOCK=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_GENERIC_IOMAP=y

#
# Code maturity level options
#
CONFIG_EXPERIMENTAL=y
# CONFIG_CLEAN_COMPILE is not set
CONFIG_BROKEN=y
CONFIG_CLEAN_COMPILE=y
CONFIG_BROKEN_ON_SMP=y
CONFIG_INIT_ENV_ARG_LIMIT=32

#
# General setup
@ -33,7 +32,10 @@ CONFIG_KOBJECT_UEVENT=y
# CONFIG_IKCONFIG is not set
# CONFIG_EMBEDDED is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
# CONFIG_KALLSYMS_EXTRA_PASS is not set
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
@ -81,6 +83,7 @@ CONFIG_ARCH_SHARK=y
# CONFIG_ARCH_VERSATILE is not set
# CONFIG_ARCH_IMX is not set
# CONFIG_ARCH_H720X is not set
# CONFIG_ARCH_AAEC2000 is not set

#
# Processor Type
@ -103,10 +106,12 @@ CONFIG_CPU_TLB_V4WB=y
#
CONFIG_ISA=y
CONFIG_ISA_DMA=y
CONFIG_ISA_DMA_API=y
CONFIG_PCI=y
CONFIG_PCI_HOST_VIA82C505=y
CONFIG_PCI_LEGACY_PROC=y
# CONFIG_PCI_NAMES is not set
# CONFIG_PCI_DEBUG is not set

#
# PCCARD (PCMCIA/CardBus) support
@ -116,7 +121,9 @@ CONFIG_PCI_LEGACY_PROC=y
#
# Kernel Features
#
# CONFIG_SMP is not set
# CONFIG_PREEMPT is not set
# CONFIG_DISCONTIGMEM is not set
CONFIG_LEDS=y
CONFIG_LEDS_TIMER=y
# CONFIG_LEDS_CPU is not set
@ -163,6 +170,7 @@ CONFIG_BINFMT_ELF=y
# CONFIG_STANDALONE is not set
CONFIG_PREVENT_FIRMWARE_BUILD=y
# CONFIG_FW_LOADER is not set
# CONFIG_DEBUG_DRIVER is not set

#
# Memory Technology Devices (MTD)
@ -172,8 +180,8 @@ CONFIG_PREVENT_FIRMWARE_BUILD=y
#
# Parallel port support
#
CONFIG_PARPORT=y
CONFIG_PARPORT_PC=y
CONFIG_PARPORT=m
CONFIG_PARPORT_PC=m
# CONFIG_PARPORT_SERIAL is not set
# CONFIG_PARPORT_PC_FIFO is not set
# CONFIG_PARPORT_PC_SUPERIO is not set
@ -189,7 +197,6 @@ CONFIG_PARPORT_PC=y
#
# Block devices
#
# CONFIG_BLK_DEV_FD is not set
# CONFIG_BLK_DEV_XD is not set
# CONFIG_PARIDE is not set
# CONFIG_BLK_CPQ_DA is not set
@ -229,7 +236,7 @@ CONFIG_BLK_DEV_IDE=y
# CONFIG_BLK_DEV_IDE_SATA is not set
CONFIG_BLK_DEV_IDEDISK=y
# CONFIG_IDEDISK_MULTI_MODE is not set
CONFIG_BLK_DEV_IDECD=y
CONFIG_BLK_DEV_IDECD=m
# CONFIG_BLK_DEV_IDETAPE is not set
CONFIG_BLK_DEV_IDEFLOPPY=y
# CONFIG_BLK_DEV_IDESCSI is not set
@ -261,6 +268,7 @@ CONFIG_CHR_DEV_ST=m
CONFIG_BLK_DEV_SR=m
# CONFIG_BLK_DEV_SR_VENDOR is not set
CONFIG_CHR_DEV_SG=m
# CONFIG_CHR_DEV_SCH is not set

#
# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
@ -290,17 +298,14 @@ CONFIG_CHR_DEV_SG=m
# CONFIG_SCSI_AIC7XXX_OLD is not set
# CONFIG_SCSI_AIC79XX is not set
# CONFIG_SCSI_DPT_I2O is not set
# CONFIG_SCSI_ADVANSYS is not set
# CONFIG_SCSI_IN2000 is not set
# CONFIG_MEGARAID_NEWGEN is not set
# CONFIG_MEGARAID_LEGACY is not set
# CONFIG_SCSI_SATA is not set
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_SCSI_CPQFCTS is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_DTC3280 is not set
# CONFIG_SCSI_EATA is not set
# CONFIG_SCSI_EATA_PIO is not set
# CONFIG_SCSI_FUTURE_DOMAIN is not set
# CONFIG_SCSI_GDTH is not set
# CONFIG_SCSI_GENERIC_NCR5380 is not set
@ -314,11 +319,8 @@ CONFIG_CHR_DEV_SG=m
# CONFIG_SCSI_SYM53C8XX_2 is not set
# CONFIG_SCSI_IPR is not set
# CONFIG_SCSI_PAS16 is not set
# CONFIG_SCSI_PCI2000 is not set
# CONFIG_SCSI_PCI2220I is not set
# CONFIG_SCSI_PSI240I is not set
# CONFIG_SCSI_QLOGIC_FAS is not set
# CONFIG_SCSI_QLOGIC_ISP is not set
# CONFIG_SCSI_QLOGIC_FC is not set
# CONFIG_SCSI_QLOGIC_1280 is not set
CONFIG_SCSI_QLA2XXX=m
@ -327,6 +329,7 @@ CONFIG_SCSI_QLA2XXX=m
# CONFIG_SCSI_QLA2300 is not set
# CONFIG_SCSI_QLA2322 is not set
# CONFIG_SCSI_QLA6312 is not set
# CONFIG_SCSI_LPFC is not set
# CONFIG_SCSI_SYM53C416 is not set
# CONFIG_SCSI_DC395x is not set
# CONFIG_SCSI_DC390T is not set
@ -344,6 +347,8 @@ CONFIG_SCSI_QLA2XXX=m
# Fusion MPT device support
#
# CONFIG_FUSION is not set
# CONFIG_FUSION_SPI is not set
# CONFIG_FUSION_FC is not set

#
# IEEE 1394 (FireWire) support
@ -365,7 +370,6 @@ CONFIG_NET=y
#
CONFIG_PACKET=y
# CONFIG_PACKET_MMAP is not set
# CONFIG_NETLINK_DEV is not set
CONFIG_UNIX=y
# CONFIG_NET_KEY is not set
CONFIG_INET=y
@ -380,7 +384,7 @@ CONFIG_INET=y
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_TUNNEL is not set
# CONFIG_IP_TCPDIAG is not set
CONFIG_IP_TCPDIAG=y
# CONFIG_IP_TCPDIAG_IPV6 is not set
# CONFIG_IPV6 is not set
# CONFIG_NETFILTER is not set
@ -439,6 +443,7 @@ CONFIG_NET_ETHERNET=y
# CONFIG_LANCE is not set
# CONFIG_NET_VENDOR_SMC is not set
# CONFIG_SMC91X is not set
# CONFIG_DM9000 is not set
# CONFIG_NET_VENDOR_RACAL is not set

#
@ -483,9 +488,11 @@ CONFIG_CS89x0=y
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
# CONFIG_R8169 is not set
# CONFIG_SKGE is not set
# CONFIG_SK98LIN is not set
# CONFIG_VIA_VELOCITY is not set
# CONFIG_TIGON3 is not set
# CONFIG_BNX2 is not set

#
# Ethernet (10000 Mbit)
@ -569,7 +576,6 @@ CONFIG_SERIO_I8042=y
CONFIG_SERIO_LIBPS2=y
# CONFIG_SERIO_RAW is not set
# CONFIG_GAMEPORT is not set
CONFIG_SOUND_GAMEPORT=y

#
# Character devices
@ -592,6 +598,7 @@ CONFIG_SERIAL_8250_NR_UARTS=4
#
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
# CONFIG_SERIAL_JSM is not set
CONFIG_UNIX98_PTYS=y
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
@ -653,6 +660,7 @@ CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
CONFIG_FB_SOFT_CURSOR=y
# CONFIG_FB_MACMODES is not set
# CONFIG_FB_MODE_HELPERS is not set
# CONFIG_FB_TILEBLITTING is not set
# CONFIG_FB_CIRRUS is not set
@ -674,7 +682,7 @@ CONFIG_FB_CYBER2000=y
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VOODOO1 is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_PM3 is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_VIRTUAL is not set

#
@ -808,7 +816,7 @@ CONFIG_DNOTIFY=y
#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=y
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
# CONFIG_ZISOFS is not set
# CONFIG_UDF_FS is not set
@ -816,9 +824,9 @@ CONFIG_JOLIET=y
#
# DOS/FAT/NT Filesystems
#
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_FAT_FS=m
CONFIG_MSDOS_FS=m
CONFIG_VFAT_FS=m
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
# CONFIG_NTFS_FS is not set
@ -833,7 +841,6 @@ CONFIG_DEVFS_MOUNT=y
# CONFIG_DEVFS_DEBUG is not set
# CONFIG_DEVPTS_FS_XATTR is not set
# CONFIG_TMPFS is not set
# CONFIG_HUGETLBFS is not set
# CONFIG_HUGETLB_PAGE is not set
CONFIG_RAMFS=y

@ -857,13 +864,14 @@ CONFIG_RAMFS=y
#
# Network File Systems
#
CONFIG_NFS_FS=y
# CONFIG_NFS_V3 is not set
CONFIG_NFS_FS=m
CONFIG_NFS_V3=y
# CONFIG_NFS_V4 is not set
# CONFIG_NFS_DIRECTIO is not set
# CONFIG_NFSD is not set
CONFIG_LOCKD=y
CONFIG_SUNRPC=y
CONFIG_LOCKD=m
CONFIG_LOCKD_V4=y
CONFIG_SUNRPC=m
# CONFIG_RPCSEC_GSS_KRB5 is not set
# CONFIG_RPCSEC_GSS_SPKM3 is not set
# CONFIG_SMB_FS is not set
@ -895,12 +903,12 @@ CONFIG_MSDOS_PARTITION=y
#
# Native Language Support
#
CONFIG_NLS=y
CONFIG_NLS=m
CONFIG_NLS_DEFAULT="iso8859-1"
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_CODEPAGE_437=m
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
CONFIG_NLS_CODEPAGE_850=y
CONFIG_NLS_CODEPAGE_850=m
# CONFIG_NLS_CODEPAGE_852 is not set
# CONFIG_NLS_CODEPAGE_855 is not set
# CONFIG_NLS_CODEPAGE_857 is not set
@ -921,7 +929,7 @@ CONFIG_NLS_CODEPAGE_850=y
# CONFIG_NLS_CODEPAGE_1250 is not set
# CONFIG_NLS_CODEPAGE_1251 is not set
# CONFIG_NLS_ASCII is not set
CONFIG_NLS_ISO8859_1=y
CONFIG_NLS_ISO8859_1=m
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
# CONFIG_NLS_ISO8859_4 is not set
@ -945,11 +953,22 @@ CONFIG_NLS_ISO8859_1=y
# Kernel hacking
#
# CONFIG_PRINTK_TIME is not set
# CONFIG_DEBUG_KERNEL is not set
CONFIG_DEBUG_KERNEL=y
# CONFIG_MAGIC_SYSRQ is not set
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_SCHEDSTATS is not set
# CONFIG_DEBUG_SLAB is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
# CONFIG_DEBUG_KOBJECT is not set
CONFIG_DEBUG_BUGVERBOSE=y
# CONFIG_DEBUG_INFO is not set
# CONFIG_DEBUG_FS is not set
CONFIG_FRAME_POINTER=y
CONFIG_DEBUG_USER=y
# CONFIG_DEBUG_WAITQ is not set
# CONFIG_DEBUG_ERRORS is not set
# CONFIG_DEBUG_LL is not set

#
# Security options
@ -36,7 +36,7 @@
 * The present bitmask indicates that the CPU is physically present.
 * The online bitmask indicates that the CPU is up and running.
 */
cpumask_t cpu_present_mask;
cpumask_t cpu_possible_map;
cpumask_t cpu_online_map;

/*
@ -235,7 +235,8 @@ void __init smp_prepare_boot_cpu(void)
{
	unsigned int cpu = smp_processor_id();

	cpu_set(cpu, cpu_present_mask);
	cpu_set(cpu, cpu_possible_map);
	cpu_set(cpu, cpu_present_map);
	cpu_set(cpu, cpu_online_map);
}

@ -355,7 +356,7 @@ void show_ipi_list(struct seq_file *p)

	seq_puts(p, "IPI:");

	for_each_online_cpu(cpu)
	for_each_present_cpu(cpu)
		seq_printf(p, " %10lu", per_cpu(ipi_data, cpu).ipi_count);

	seq_putc(p, '\n');
@ -248,16 +248,20 @@ static DEFINE_SPINLOCK(undef_lock);

void register_undef_hook(struct undef_hook *hook)
{
	spin_lock_irq(&undef_lock);
	unsigned long flags;

	spin_lock_irqsave(&undef_lock, flags);
	list_add(&hook->node, &undef_hook);
	spin_unlock_irq(&undef_lock);
	spin_unlock_irqrestore(&undef_lock, flags);
}

void unregister_undef_hook(struct undef_hook *hook)
{
	spin_lock_irq(&undef_lock);
	unsigned long flags;

	spin_lock_irqsave(&undef_lock, flags);
	list_del(&hook->node);
	spin_unlock_irq(&undef_lock);
	spin_unlock_irqrestore(&undef_lock, flags);
}

asmlinkage void do_undefinstr(struct pt_regs *regs)
@ -1,3 +1,33 @@
#if __LINUX_ARM_ARCH__ >= 6
	.macro	bitop, instr
	mov	r2, #1
	and	r3, r0, #7		@ Get bit offset
	add	r1, r1, r0, lsr #3	@ Get byte offset
	mov	r3, r2, lsl r3
1:	ldrexb	r2, [r1]
	\instr	r2, r2, r3
	strexb	r0, r2, [r1]
	cmpne	r0, #0
	bne	1b
	mov	pc, lr
	.endm

	.macro	testop, instr, store
	and	r3, r0, #7		@ Get bit offset
	mov	r2, #1
	add	r1, r1, r0, lsr #3	@ Get byte offset
	mov	r3, r2, lsl r3		@ create mask
1:	ldrexb	r2, [r1]
	ands	r0, r2, r3		@ save old value of bit
	\instr	ip, r2, r3		@ toggle bit
	strexb	r2, ip, [r1]
	cmp	r2, #0
	bne	1b
	cmp	r0, #0
	movne	r0, #1
2:	mov	pc, lr
	.endm
#else
	.macro	bitop, instr
	and	r2, r0, #7
	mov	r3, #1
@ -31,3 +61,4 @@
	moveq	r0, #0
	mov	pc, lr
	.endm
#endif
@ -11,73 +11,3 @@
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */
#include <linux/kernel.h>

#include <asm/io.h>

void print_warning(void)
{
	printk(KERN_WARNING "ins?/outs? not implemented on this architecture\n");
}

void insl(unsigned int port, void *to, int len)
{
	print_warning();
}

void insb(unsigned int port, void *to, int len)
{
	print_warning();
}

void outsl(unsigned int port, const void *from, int len)
{
	print_warning();
}

void outsb(unsigned int port, const void *from, int len)
{
	print_warning();
}

/* these should be in assembler again */

/*
 * Purpose: read a block of data from a hardware register to memory.
 * Proto : insw(int from_port, void *to, int len_in_words);
 * Proto : inswb(int from_port, void *to, int len_in_bytes);
 * Notes : increment to
 */
void insw(unsigned int port, void *to, int len)
{
	int i;

	for (i = 0; i < len; i++)
		((unsigned short *) to)[i] = inw(port);
}

void inswb(unsigned int port, void *to, int len)
{
	insw(port, to, len >> 2);
}

/*
 * Purpose: write a block of data from memory to a hardware register.
 * Proto : outsw(int to_reg, void *from, int len_in_words);
 * Proto : outswb(int to_reg, void *from, int len_in_bytes);
 * Notes : increments from
 */
void outsw(unsigned int port, const void *from, int len)
{
	int i;

	for (i = 0; i < len; i++)
		outw(((unsigned short *) from)[i], port);
}

void outswb(unsigned int port, const void *from, int len)
{
	outsw(port, from, len >> 2);
}
@ -174,11 +174,13 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
		max_cpus = ncores;

	/*
	 * Initialise the present mask - this tells us which CPUs should
	 * be present.
	 * Initialise the possible/present maps.
	 * cpu_possible_map describes the set of CPUs which may be present
	 * cpu_present_map describes the set of CPUs populated
	 */
	for (i = 0; i < max_cpus; i++) {
		cpu_set(i, cpu_present_mask);
		cpu_set(i, cpu_possible_map);
		cpu_set(i, cpu_present_map);
	}

	/*
@ -13,7 +13,6 @@
#include <linux/init.h>
#include <linux/kernel_stat.h>
#include <linux/sched.h>
#include <linux/version.h>

#include <asm/io.h>
#include <asm/hardware.h>
@ -24,7 +24,7 @@ static struct plat_serial8250_port serial_platform_data[] = {
		.iobase		= 0x3f8,
		.irq		= 4,
		.uartclk	= 1843200,
		.regshift	= 2,
		.regshift	= 0,
		.iotype		= UPIO_PORT,
		.flags		= UPF_BOOT_AUTOCONF | UPF_SKIP_TEST,
	},
@ -32,7 +32,7 @@ static struct plat_serial8250_port serial_platform_data[] = {
		.iobase		= 0x2f8,
		.irq		= 3,
		.uartclk	= 1843200,
		.regshift	= 2,
		.regshift	= 0,
		.iotype		= UPIO_PORT,
		.flags		= UPF_BOOT_AUTOCONF | UPF_SKIP_TEST,
	},
@ -24,7 +24,6 @@
#include "fpa11.h"

#include <linux/module.h>
#include <linux/version.h>
#include <linux/config.h>

/* XXX */
@ -25,7 +25,6 @@

#include <linux/config.h>
#include <linux/module.h>
#include <linux/version.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/kernel.h>
@ -7,6 +7,7 @@
#include "piix4.h"

void (*pm_power_off)(void);
EXPORT_SYMBOL(pm_power_off);

void machine_restart(char * __unused)
{
@ -36,6 +36,7 @@
 * Power off function, if any
 */
void (*pm_power_off)(void);
EXPORT_SYMBOL(pm_power_off);

int voyager_level = 0;
@ -10,6 +10,7 @@
 * the voyager hal to provide the functionality
 */
#include <linux/config.h>
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/kernel_stat.h>
#include <linux/delay.h>
@ -40,6 +41,7 @@ static unsigned long cpu_irq_affinity[NR_CPUS] __cacheline_aligned = { [0 ... NR
/* per CPU data structure (for /proc/cpuinfo et al), visible externally
 * indexed physically */
struct cpuinfo_x86 cpu_data[NR_CPUS] __cacheline_aligned;
EXPORT_SYMBOL(cpu_data);

/* physical ID of the CPU used to boot the system */
unsigned char boot_cpu_id;
@ -72,6 +74,7 @@ static volatile unsigned long smp_invalidate_needed;
/* Bitmask of currently online CPUs - used by setup.c for
   /proc/cpuinfo, visible externally but still physical */
cpumask_t cpu_online_map = CPU_MASK_NONE;
EXPORT_SYMBOL(cpu_online_map);

/* Bitmask of CPUs present in the system - exported by i386_syms.c, used
 * by scheduler but indexed physically */
@ -238,6 +241,7 @@ static cpumask_t smp_commenced_mask = CPU_MASK_NONE;
/* This is for the new dynamic CPU boot code */
cpumask_t cpu_callin_map = CPU_MASK_NONE;
cpumask_t cpu_callout_map = CPU_MASK_NONE;
EXPORT_SYMBOL(cpu_callout_map);

/* The per processor IRQ masks (these are usually kept in sync) */
static __u16 vic_irq_mask[NR_CPUS] __cacheline_aligned;
@ -978,6 +982,7 @@ void flush_tlb_page(struct vm_area_struct * vma, unsigned long va)

	preempt_enable();
}
EXPORT_SYMBOL(flush_tlb_page);

/* enable the requested IRQs */
static void
@ -1109,6 +1114,7 @@ smp_call_function (void (*func) (void *info), void *info, int retry,

	return 0;
}
EXPORT_SYMBOL(smp_call_function);

/* Sorry about the name.  In an APIC based system, the APICs
 * themselves are programmed to send a timer interrupt.  This is used
@ -220,13 +220,6 @@ config IOSAPIC
	depends on !IA64_HP_SIM
	default y

config IA64_SGI_SN_SIM
	bool "SGI Medusa Simulator Support"
	depends on IA64_SGI_SN2 || IA64_GENERIC
	help
	  If you are compiling a kernel that will run under SGI's IA-64
	  simulator (Medusa) then say Y, otherwise say N.

config IA64_SGI_SN_XP
	tristate "Support communication between SGI SSIs"
	select IA64_UNCACHED_ALLOCATOR
@ -81,7 +81,6 @@ CONFIG_HOLES_IN_ZONE=y
CONFIG_ARCH_DISCONTIGMEM_ENABLE=y
# CONFIG_IA64_CYCLONE is not set
CONFIG_IOSAPIC=y
CONFIG_IA64_SGI_SN_SIM=y
CONFIG_FORCE_MAX_ZONEORDER=18
CONFIG_SMP=y
CONFIG_NR_CPUS=512
@ -20,6 +20,7 @@
 * 02/01/00 R.Seth	fixed get_cpuinfo for SMP
 * 01/07/99 S.Eranian	added the support for command line argument
 * 06/24/99 W.Drummond	added boot_cpu_data.
 * 05/28/05 Z. Menyhart	Dynamic stride size for "flush_icache_range()"
 */
#include <linux/config.h>
#include <linux/module.h>
@ -84,6 +85,13 @@ struct io_space io_space[MAX_IO_SPACES];
EXPORT_SYMBOL(io_space);
unsigned int num_io_spaces;

/*
 * "flush_icache_range()" needs to know what processor dependent stride size to use
 * when it makes i-cache(s) coherent with d-caches.
 */
#define	I_CACHE_STRIDE_SHIFT	5	/* Safest way to go: 32 bytes by 32 bytes */
unsigned long ia64_i_cache_stride_shift = ~0;

/*
 * The merge_mask variable needs to be set to (max(iommu_page_size(iommu)) - 1).  This
 * mask specifies a mask of address bits that must be 0 in order for two buffers to be
@ -628,6 +636,12 @@ setup_per_cpu_areas (void)
	/* start_kernel() requires this... */
}

/*
 * Calculate the max. cache line size.
 *
 * In addition, the minimum of the i-cache stride sizes is calculated for
 * "flush_icache_range()".
 */
static void
get_max_cacheline_size (void)
{
@ -641,6 +655,8 @@ get_max_cacheline_size (void)
		printk(KERN_ERR "%s: ia64_pal_cache_summary() failed (status=%ld)\n",
		       __FUNCTION__, status);
		max = SMP_CACHE_BYTES;
		/* Safest setup for "flush_icache_range()" */
		ia64_i_cache_stride_shift = I_CACHE_STRIDE_SHIFT;
		goto out;
	}

@ -649,14 +665,31 @@ get_max_cacheline_size (void)
						    &cci);
		if (status != 0) {
			printk(KERN_ERR
			       "%s: ia64_pal_cache_config_info(l=%lu) failed (status=%ld)\n",
			       "%s: ia64_pal_cache_config_info(l=%lu, 2) failed (status=%ld)\n",
			       __FUNCTION__, l, status);
			max = SMP_CACHE_BYTES;
			/* The safest setup for "flush_icache_range()" */
			cci.pcci_stride = I_CACHE_STRIDE_SHIFT;
			cci.pcci_unified = 1;
		}
		line_size = 1 << cci.pcci_line_size;
		if (line_size > max)
			max = line_size;
		}
		if (!cci.pcci_unified) {
			status = ia64_pal_cache_config_info(l,
						    /* cache_type (instruction)= */ 1,
						    &cci);
			if (status != 0) {
				printk(KERN_ERR
				"%s: ia64_pal_cache_config_info(l=%lu, 1) failed (status=%ld)\n",
					__FUNCTION__, l, status);
				/* The safest setup for "flush_icache_range()" */
				cci.pcci_stride = I_CACHE_STRIDE_SHIFT;
			}
		}
		if (cci.pcci_stride < ia64_i_cache_stride_shift)
			ia64_i_cache_stride_shift = cci.pcci_stride;
	}
  out:
	if (max > ia64_max_cacheline_size)
		ia64_max_cacheline_size = max;
@ -36,12 +36,14 @@ int arch_register_cpu(int num)
	parent = &sysfs_nodes[cpu_to_node(num)];
#endif /* CONFIG_NUMA */

#ifdef CONFIG_ACPI_BOOT
	/*
	 * If CPEI cannot be re-targetted, and this is
	 * CPEI target, then dont create the control file
	 */
	if (!can_cpei_retarget() && is_cpu_cpei_target(num))
		sysfs_cpus[num].cpu.no_control = 1;
#endif

	return register_cpu(&sysfs_cpus[num].cpu, num, parent);
}
@ -3,37 +3,59 @@
 *
 * Copyright (C) 1999-2001, 2005 Hewlett-Packard Co
 *	David Mosberger-Tang <davidm@hpl.hp.com>
 *
 * 05/28/05 Zoltan Menyhart	Dynamic stride size
 */

#include <asm/asmmacro.h>
#include <asm/page.h>

/*
 * flush_icache_range(start,end)
 *	Must flush range from start to end-1 but nothing else (need to
 *
 *	Make i-cache(s) coherent with d-caches.
 *
 *	Must deal with range from start to end-1 but nothing else (need to
 *	be careful not to touch addresses that may be unmapped).
 *
 *	Note: "in0" and "in1" are preserved for debugging purposes.
 */
GLOBAL_ENTRY(flush_icache_range)

	.prologue
	alloc	r2=ar.pfs,2,0,0,0
	sub	r8=in1,in0,1
	alloc	r2=ar.pfs,2,0,0,0
	movl	r3=ia64_i_cache_stride_shift
	mov	r21=1
	;;
	shr.u	r8=r8,5			// we flush 32 bytes per iteration
	.save	ar.lc, r3
	mov	r3=ar.lc		// save ar.lc
	ld8	r20=[r3]		// r20: stride shift
	sub	r22=in1,r0,1		// last byte address
	;;
	shr.u	r23=in0,r20		// start / (stride size)
	shr.u	r22=r22,r20		// (last byte address) / (stride size)
	shl	r21=r21,r20		// r21: stride size of the i-cache(s)
	;;
	sub	r8=r22,r23		// number of strides - 1
	shl	r24=r23,r20		// r24: addresses for "fc.i" =
					//	"start" rounded down to stride boundary
	.save	ar.lc,r3
	mov	r3=ar.lc		// save ar.lc
	;;

	.body

	mov	ar.lc=r8
	mov	ar.lc=r8
	;;
.Loop:	fc.i	in0			// issuable on M2 only
	add	in0=32,in0
	/*
	 * 32 byte aligned loop, even number of (actually 2) bundles
	 */
.Loop:	fc.i	r24			// issuable on M0 only
	add	r24=r21,r24		// we flush "stride size" bytes per iteration
	nop.i	0
	br.cloop.sptk.few .Loop
	;;
	sync.i
	;;
	srlz.i
	;;
	mov	ar.lc=r3		// restore ar.lc
	mov	ar.lc=r3		// restore ar.lc
	br.ret.sptk.many rp
END(flush_icache_range)
@ -157,6 +157,7 @@ alloc_pci_controller (int seg)

	memset(controller, 0, sizeof(*controller));
	controller->segment = seg;
	controller->node = -1;
	return controller;
}

@ -288,6 +289,7 @@ pci_acpi_scan_root(struct acpi_device *device, int domain, int bus)
	unsigned int windows = 0;
	struct pci_bus *pbus;
	char *name;
	int pxm;

	controller = alloc_pci_controller(domain);
	if (!controller)
@ -295,10 +297,16 @@ pci_acpi_scan_root(struct acpi_device *device, int domain, int bus)

	controller->acpi_handle = device->handle;

	pxm = acpi_get_pxm(controller->acpi_handle);
#ifdef CONFIG_NUMA
	if (pxm >= 0)
		controller->node = pxm_to_nid_map[pxm];
#endif

	acpi_walk_resources(device->handle, METHOD_NAME__CRS, count_window,
			&windows);
	controller->window = kmalloc(sizeof(*controller->window) * windows,
			GFP_KERNEL);
	controller->window = kmalloc_node(sizeof(*controller->window) * windows,
			GFP_KERNEL, controller->node);
	if (!controller->window)
		goto out2;
@ -61,7 +61,7 @@ sn_default_pci_unmap(struct pci_dev *pdev, dma_addr_t addr, int direction)
}

static void *
sn_default_pci_bus_fixup(struct pcibus_bussoft *soft)
sn_default_pci_bus_fixup(struct pcibus_bussoft *soft, struct pci_controller *controller)
{
	return NULL;
}
@ -362,7 +362,7 @@ void sn_pci_controller_fixup(int segment, int busnum, struct pci_bus *bus)

	provider_soft = NULL;
	if (provider->bus_fixup)
		provider_soft = (*provider->bus_fixup) (prom_bussoft_ptr);
		provider_soft = (*provider->bus_fixup) (prom_bussoft_ptr, controller);

	if (provider_soft == NULL)
		return;		/* fixup failed or not applicable */
@ -380,6 +380,22 @@ void sn_pci_controller_fixup(int segment, int busnum, struct pci_bus *bus)
	SN_PCIBUS_BUSSOFT(bus)->bs_xwidget_info =
	    &(hubdev_info->hdi_xwidget_info[SN_PCIBUS_BUSSOFT(bus)->bs_xid]);

	/*
	 * If the node information we obtained during the fixup phase is invalid
	 * then set controller->node to -1 (undetermined)
	 */
	if (controller->node >= num_online_nodes()) {
		struct pcibus_bussoft *b = SN_PCIBUS_BUSSOFT(bus);

		printk(KERN_WARNING "Device ASIC=%u XID=%u PBUSNUM=%lu"
				    "L_IO=%lx L_MEM=%lx BASE=%lx\n",
			b->bs_asic_type, b->bs_xid, b->bs_persist_busnum,
			b->bs_legacy_io, b->bs_legacy_mem, b->bs_base);
		printk(KERN_WARNING "on node %d but only %d nodes online."
				    "Association set to undetermined.\n",
			controller->node, num_online_nodes());
		controller->node = -1;
	}
	return;

error_return:
@ -72,7 +72,7 @@ xpc_initialize_channels(struct xpc_partition *part, partid_t partid)
enum xpc_retval
xpc_setup_infrastructure(struct xpc_partition *part)
{
	int ret;
	int ret, cpuid;
	struct timer_list *timer;
	partid_t partid = XPC_PARTID(part);

@ -223,9 +223,9 @@ xpc_setup_infrastructure(struct xpc_partition *part)
	xpc_vars_part[partid].openclose_args_pa =
					__pa(part->local_openclose_args);
	xpc_vars_part[partid].IPI_amo_pa = __pa(part->local_IPI_amo_va);
	xpc_vars_part[partid].IPI_nasid = cpuid_to_nasid(smp_processor_id());
	xpc_vars_part[partid].IPI_phys_cpuid =
					cpu_physical_id(smp_processor_id());
	cpuid = raw_smp_processor_id();	/* any CPU in this partition will do */
	xpc_vars_part[partid].IPI_nasid = cpuid_to_nasid(cpuid);
	xpc_vars_part[partid].IPI_phys_cpuid = cpu_physical_id(cpuid);
	xpc_vars_part[partid].nchannels = part->nchannels;
	xpc_vars_part[partid].magic = XPC_VP_MAGIC1;
@ -79,6 +79,7 @@ void *sn_dma_alloc_coherent(struct device *dev, size_t size,
{
	void *cpuaddr;
	unsigned long phys_addr;
	int node;
	struct pci_dev *pdev = to_pci_dev(dev);
	struct sn_pcibus_provider *provider = SN_PCIDEV_BUSPROVIDER(pdev);

@ -86,10 +87,19 @@ void *sn_dma_alloc_coherent(struct device *dev, size_t size,

	/*
	 * Allocate the memory.
	 * FIXME: We should be doing alloc_pages_node for the node closest
	 *        to the PCI device.
	 */
	if (!(cpuaddr = (void *)__get_free_pages(GFP_ATOMIC, get_order(size))))
	node = pcibus_to_node(pdev->bus);
	if (likely(node >=0)) {
		struct page *p = alloc_pages_node(node, GFP_ATOMIC, get_order(size));

		if (likely(p))
			cpuaddr = page_address(p);
		else
			return NULL;
	} else
		cpuaddr = (void *)__get_free_pages(GFP_ATOMIC, get_order(size));

	if (unlikely(!cpuaddr))
		return NULL;

	memset(cpuaddr, 0x0, size);
@ -85,7 +85,7 @@ pcibr_error_intr_handler(int irq, void *arg, struct pt_regs *regs)
}

void *
pcibr_bus_fixup(struct pcibus_bussoft *prom_bussoft)
pcibr_bus_fixup(struct pcibus_bussoft *prom_bussoft, struct pci_controller *controller)
{
	int nasid, cnode, j;
	struct hubdev_info *hubdev_info;
@ -158,6 +158,14 @@ pcibr_bus_fixup(struct pcibus_bussoft *prom_bussoft)
	memset(soft->pbi_int_ate_resource.ate, 0,
 	       (soft->pbi_int_ate_size * sizeof(uint64_t)));

	if (prom_bussoft->bs_asic_type == PCIIO_ASIC_TYPE_TIOCP)
		/*
		 * TIO PCI Bridge with no closest node information.
		 * FIXME: Find another way to determine the closest node
		 */
		controller->node = -1;
	else
		controller->node = cnode;
	return soft;
}
@ -581,7 +581,7 @@ tioca_error_intr_handler(int irq, void *arg, struct pt_regs *pt)
 * the caller.
 */
static void *
tioca_bus_fixup(struct pcibus_bussoft *prom_bussoft)
tioca_bus_fixup(struct pcibus_bussoft *prom_bussoft, struct pci_controller *controller)
{
	struct tioca_common *tioca_common;
	struct tioca_kernel *tioca_kern;
@ -646,6 +646,8 @@ tioca_bus_fixup(struct pcibus_bussoft *prom_bussoft)
		       __FUNCTION__, SGI_TIOCA_ERROR,
		       (int)tioca_common->ca_common.bs_persist_busnum);

	/* Setup locality information */
	controller->node = tioca_kern->ca_closest_node;
	return tioca_common;
}
@ -6,12 +6,17 @@ MKIMAGE := $(srctree)/scripts/mkuboot.sh

extra-y := vmlinux.bin vmlinux.gz

# two make processes may write to vmlinux.gz at the same time with make -j
quiet_cmd_mygzip = GZIP $@
cmd_mygzip = gzip -f -9 < $< > $@.$$$$ && mv $@.$$$$ $@

OBJCOPYFLAGS_vmlinux.bin := -O binary
$(obj)/vmlinux.bin: vmlinux FORCE
	$(call if_changed,objcopy)

$(obj)/vmlinux.gz: $(obj)/vmlinux.bin FORCE
	$(call if_changed,gzip)
	$(call if_changed,mygzip)

quiet_cmd_uimage = UIMAGE  $@
cmd_uimage = $(CONFIG_SHELL) $(MKIMAGE) -A ppc -O linux -T kernel \
@ -31,10 +31,13 @@ _GLOBAL(__970_cpu_preinit)
	 */
	mfspr	r0,SPRN_PVR
	srwi	r0,r0,16
	cmpwi	cr0,r0,0x39
	cmpwi	cr1,r0,0x3c
	cror	4*cr0+eq,4*cr0+eq,4*cr1+eq
	cmpwi	r0,0x39
	beq	1f
	cmpwi	r0,0x3c
	beq	1f
	cmpwi	r0,0x44
	bnelr
1:

	/* Make sure HID4:rm_ci is off before MMU is turned off, that large
	 * pages are enabled with HID4:61 and clear HID5:DCBZ_size and
@ -133,12 +136,14 @@ _GLOBAL(__save_cpu_setup)
	/* We only deal with 970 for now */
	mfspr	r0,SPRN_PVR
	srwi	r0,r0,16
	cmpwi	cr0,r0,0x39
	cmpwi	cr1,r0,0x3c
	cror	4*cr0+eq,4*cr0+eq,4*cr1+eq
	bne	1f
	cmpwi	r0,0x39
	beq	1f
	cmpwi	r0,0x3c
	beq	1f
	cmpwi	r0,0x44
	bne	2f

	/* Save HID0,1,4 and 5 */
1:	/* Save HID0,1,4 and 5 */
	mfspr	r3,SPRN_HID0
	std	r3,CS_HID0(r5)
	mfspr	r3,SPRN_HID1
@ -148,7 +153,7 @@ _GLOBAL(__save_cpu_setup)
	mfspr	r3,SPRN_HID5
	std	r3,CS_HID5(r5)

1:
2:
	mtcr	r7
	blr

@ -165,12 +170,14 @@ _GLOBAL(__restore_cpu_setup)
	/* We only deal with 970 for now */
	mfspr	r0,SPRN_PVR
	srwi	r0,r0,16
	cmpwi	cr0,r0,0x39
	cmpwi	cr1,r0,0x3c
	cror	4*cr0+eq,4*cr0+eq,4*cr1+eq
	bne	1f
	cmpwi	r0,0x39
	beq	1f
	cmpwi	r0,0x3c
	beq	1f
	cmpwi	r0,0x44
	bnelr

	/* Before accessing memory, we make sure rm_ci is clear */
1:	/* Before accessing memory, we make sure rm_ci is clear */
	li	r0,0
	mfspr	r3,SPRN_HID4
	rldimi	r3,r0,40,23	/* clear bit 23 (rm_ci) */
@ -223,6 +230,5 @@ _GLOBAL(__restore_cpu_setup)
	mtspr	SPRN_HID5,r3
	sync
	isync
1:
	blr
@ -183,6 +183,21 @@ struct cpu_spec cpu_specs[] = {
		.cpu_setup		= __setup_cpu_ppc970,
		.firmware_features	= COMMON_PPC64_FW,
	},
	{	/* PPC970MP */
		.pvr_mask		= 0xffff0000,
		.pvr_value		= 0x00440000,
		.cpu_name		= "PPC970MP",
		.cpu_features		= CPU_FTR_SPLIT_ID_CACHE |
			CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE |
			CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_ALTIVEC_COMP |
			CPU_FTR_CAN_NAP | CPU_FTR_PMC8 | CPU_FTR_MMCRA,
		.cpu_user_features	= COMMON_USER_PPC64 |
			PPC_FEATURE_HAS_ALTIVEC_COMP,
		.icache_bsize		= 128,
		.dcache_bsize		= 128,
		.cpu_setup		= __setup_cpu_ppc970,
		.firmware_features	= COMMON_PPC64_FW,
	},
	{	/* Power5 */
		.pvr_mask		= 0xffff0000,
		.pvr_value		= 0x003a0000,
@ -38,11 +38,12 @@ static inline void iSeries_hunlock(unsigned long slot)
}

static long iSeries_hpte_insert(unsigned long hpte_group, unsigned long va,
			 unsigned long prpn, int secondary,
			 unsigned long hpteflags, int bolted, int large)
			 unsigned long prpn, unsigned long vflags,
			 unsigned long rflags)
{
	long slot;
	HPTE lhpte;
	hpte_t lhpte;
	int secondary = 0;

	/*
	 * The hypervisor tries both primary and secondary.
@ -50,13 +51,13 @@ static long iSeries_hpte_insert(unsigned long hpte_group, unsigned long va,
	 * it means we have already tried both primary and secondary,
	 * so we return failure immediately.
	 */
	if (secondary)
	if (vflags & HPTE_V_SECONDARY)
		return -1;

	iSeries_hlock(hpte_group);

	slot = HvCallHpt_findValid(&lhpte, va >> PAGE_SHIFT);
	BUG_ON(lhpte.dw0.dw0.v);
	BUG_ON(lhpte.v & HPTE_V_VALID);

	if (slot == -1)	{ /* No available entry found in either group */
		iSeries_hunlock(hpte_group);
@ -64,19 +65,13 @@ static long iSeries_hpte_insert(unsigned long hpte_group, unsigned long va,
	}

	if (slot < 0) {		/* MSB set means secondary group */
		vflags |= HPTE_V_VALID;
		secondary = 1;
		slot &= 0x7fffffffffffffff;
	}

	lhpte.dw1.dword1 = 0;
	lhpte.dw1.dw1.rpn = physRpn_to_absRpn(prpn);
	lhpte.dw1.flags.flags = hpteflags;

	lhpte.dw0.dword0 = 0;
	lhpte.dw0.dw0.avpn = va >> 23;
	lhpte.dw0.dw0.h = secondary;
	lhpte.dw0.dw0.bolted = bolted;
	lhpte.dw0.dw0.v = 1;
	lhpte.v = (va >> 23) << HPTE_V_AVPN_SHIFT | vflags | HPTE_V_VALID;
	lhpte.r = (physRpn_to_absRpn(prpn) << HPTE_R_RPN_SHIFT) | rflags;

	/* Now fill in the actual HPTE */
	HvCallHpt_addValidate(slot, secondary, &lhpte);
@ -88,20 +83,17 @@ static long iSeries_hpte_insert(unsigned long hpte_group, unsigned long va,

static unsigned long iSeries_hpte_getword0(unsigned long slot)
{
	unsigned long dword0;
	HPTE hpte;
	hpte_t hpte;

	HvCallHpt_get(&hpte, slot);
	dword0 = hpte.dw0.dword0;

	return dword0;
	return hpte.v;
}

static long iSeries_hpte_remove(unsigned long hpte_group)
{
	unsigned long slot_offset;
	int i;
	HPTE lhpte;
	unsigned long hpte_v;

	/* Pick a random slot to start at */
	slot_offset = mftb() & 0x7;
@ -109,10 +101,9 @@ static long iSeries_hpte_remove(unsigned long hpte_group)
	iSeries_hlock(hpte_group);

	for (i = 0; i < HPTES_PER_GROUP; i++) {
		lhpte.dw0.dword0 =
			iSeries_hpte_getword0(hpte_group + slot_offset);
		hpte_v = iSeries_hpte_getword0(hpte_group + slot_offset);

		if (!lhpte.dw0.dw0.bolted) {
		if (! (hpte_v & HPTE_V_BOLTED)) {
			HvCallHpt_invalidateSetSwBitsGet(hpte_group +
							 slot_offset, 0, 0);
			iSeries_hunlock(hpte_group);
@ -137,13 +128,13 @@ static long iSeries_hpte_remove(unsigned long hpte_group)
static long iSeries_hpte_updatepp(unsigned long slot, unsigned long newpp,
				  unsigned long va, int large, int local)
{
	HPTE hpte;
	hpte_t hpte;
	unsigned long avpn = va >> 23;

	iSeries_hlock(slot);

	HvCallHpt_get(&hpte, slot);
	if ((hpte.dw0.dw0.avpn == avpn) && (hpte.dw0.dw0.v)) {
	if ((HPTE_V_AVPN_VAL(hpte.v) == avpn) && (hpte.v & HPTE_V_VALID)) {
		/*
		 * Hypervisor expects bits as NPPP, which is
		 * different from how they are mapped in our PP.
@ -167,7 +158,7 @@ static long iSeries_hpte_updatepp(unsigned long slot, unsigned long newpp,
 */
static long iSeries_hpte_find(unsigned long vpn)
{
	HPTE hpte;
	hpte_t hpte;
	long slot;

	/*
@ -177,7 +168,7 @@ static long iSeries_hpte_find(unsigned long vpn)
	 * 	0x80000000xxxxxxxx : Entry found in secondary group, slot x
	 */
	slot = HvCallHpt_findValid(&hpte, vpn);
	if (hpte.dw0.dw0.v) {
	if (hpte.v & HPTE_V_VALID) {
		if (slot < 0) {
			slot &= 0x7fffffffffffffff;
			slot = -slot;
@ -212,7 +203,7 @@ static void iSeries_hpte_updateboltedpp(unsigned long newpp, unsigned long ea)
static void iSeries_hpte_invalidate(unsigned long slot, unsigned long va,
				    int large, int local)
{
	HPTE lhpte;
	unsigned long hpte_v;
	unsigned long avpn = va >> 23;
	unsigned long flags;

@ -220,9 +211,9 @@ static void iSeries_hpte_invalidate(unsigned long slot, unsigned long va,

	iSeries_hlock(slot);

	lhpte.dw0.dword0 = iSeries_hpte_getword0(slot);
	hpte_v = iSeries_hpte_getword0(slot);

	if ((lhpte.dw0.dw0.avpn == avpn) && lhpte.dw0.dw0.v)
	if ((HPTE_V_AVPN_VAL(hpte_v) == avpn) && (hpte_v & HPTE_V_VALID))
		HvCallHpt_invalidateSetSwBitsGet(slot, 0, 0);

	iSeries_hunlock(slot);
@ -503,7 +503,7 @@ static void __init build_iSeries_Memory_Map(void)

	/* Fill in the hashed page table hash mask */
	num_ptegs = hptSizePages *
		(PAGE_SIZE / (sizeof(HPTE) * HPTES_PER_GROUP));
		(PAGE_SIZE / (sizeof(hpte_t) * HPTES_PER_GROUP));
	htab_hash_mask = num_ptegs - 1;

	/*
@ -618,25 +618,23 @@ static void __init setup_iSeries_cache_sizes(void)
static void iSeries_make_pte(unsigned long va, unsigned long pa,
			     int mode)
{
	HPTE local_hpte, rhpte;
	hpte_t local_hpte, rhpte;
	unsigned long hash, vpn;
	long slot;

	vpn = va >> PAGE_SHIFT;
	hash = hpt_hash(vpn, 0);

	local_hpte.dw1.dword1 = pa | mode;
	local_hpte.dw0.dword0 = 0;
	local_hpte.dw0.dw0.avpn = va >> 23;
	local_hpte.dw0.dw0.bolted = 1;		/* bolted */
	local_hpte.dw0.dw0.v = 1;
	local_hpte.r = pa | mode;
	local_hpte.v = ((va >> 23) << HPTE_V_AVPN_SHIFT)
		| HPTE_V_BOLTED | HPTE_V_VALID;

	slot = HvCallHpt_findValid(&rhpte, vpn);
	if (slot < 0) {
		/* Must find space in primary group */
		panic("hash_page: hpte already exists\n");
	}
	HvCallHpt_addValidate(slot, 0, (HPTE *)&local_hpte );
	HvCallHpt_addValidate(slot, 0, &local_hpte);
}

/*
@ -646,7 +644,7 @@ static void __init iSeries_bolt_kernel(unsigned long saddr, unsigned long eaddr)
{
	unsigned long pa;
	unsigned long mode_rw = _PAGE_ACCESSED | _PAGE_COHERENT | PP_RWXX;
	HPTE hpte;
	hpte_t hpte;

	for (pa = saddr; pa < eaddr ;pa += PAGE_SIZE) {
		unsigned long ea = (unsigned long)__va(pa);
@ -659,7 +657,7 @@ static void __init iSeries_bolt_kernel(unsigned long saddr, unsigned long eaddr)
		if (!in_kernel_text(ea))
			mode_rw |= HW_NO_EXEC;

		if (hpte.dw0.dw0.v) {
		if (hpte.v & HPTE_V_VALID) {
			/* HPTE exists, so just bolt it */
			HvCallHpt_setSwBits(slot, 0x10, 0);
			/* And make sure the pp bits are correct */
@ -277,31 +277,20 @@ void vpa_init(int cpu)
|
||||
|
||||
long pSeries_lpar_hpte_insert(unsigned long hpte_group,
|
||||
unsigned long va, unsigned long prpn,
|
||||
int secondary, unsigned long hpteflags,
|
||||
int bolted, int large)
|
||||
unsigned long vflags, unsigned long rflags)
|
||||
{
|
||||
unsigned long arpn = physRpn_to_absRpn(prpn);
|
||||
unsigned long lpar_rc;
|
||||
unsigned long flags;
|
||||
unsigned long slot;
|
||||
HPTE lhpte;
|
||||
unsigned long hpte_v, hpte_r;
|
||||
unsigned long dummy0, dummy1;
|
||||
|
||||
/* Fill in the local HPTE with absolute rpn, avpn and flags */
|
||||
lhpte.dw1.dword1 = 0;
|
||||
lhpte.dw1.dw1.rpn = arpn;
|
||||
lhpte.dw1.flags.flags = hpteflags;
|
||||
hpte_v = ((va >> 23) << HPTE_V_AVPN_SHIFT) | vflags | HPTE_V_VALID;
|
||||
if (vflags & HPTE_V_LARGE)
|
||||
hpte_v &= ~(1UL << HPTE_V_AVPN_SHIFT);
|
||||
|
||||
lhpte.dw0.dword0 = 0;
|
||||
lhpte.dw0.dw0.avpn = va >> 23;
|
||||
lhpte.dw0.dw0.h = secondary;
|
||||
lhpte.dw0.dw0.bolted = bolted;
|
||||
lhpte.dw0.dw0.v = 1;
|
||||
|
||||
if (large) {
|
||||
lhpte.dw0.dw0.l = 1;
|
||||
lhpte.dw0.dw0.avpn &= ~0x1UL;
|
||||
}
|
||||
hpte_r = (arpn << HPTE_R_RPN_SHIFT) | rflags;
|
||||
|
||||
/* Now fill in the actual HPTE */
|
||||
/* Set CEC cookie to 0 */
|
||||
@ -312,11 +301,11 @@ long pSeries_lpar_hpte_insert(unsigned long hpte_group,
|
||||
flags = 0;
|
||||
|
||||
/* XXX why is this here? - Anton */
|
||||
if (hpteflags & (_PAGE_GUARDED|_PAGE_NO_CACHE))
|
||||
lhpte.dw1.flags.flags &= ~_PAGE_COHERENT;
|
||||
if (rflags & (_PAGE_GUARDED|_PAGE_NO_CACHE))
|
||||
hpte_r &= ~_PAGE_COHERENT;
|
||||
|
||||
lpar_rc = plpar_hcall(H_ENTER, flags, hpte_group, lhpte.dw0.dword0,
|
||||
lhpte.dw1.dword1, &slot, &dummy0, &dummy1);
|
||||
lpar_rc = plpar_hcall(H_ENTER, flags, hpte_group, hpte_v,
|
||||
hpte_r, &slot, &dummy0, &dummy1);
|
||||
|
||||
if (unlikely(lpar_rc == H_PTEG_Full))
|
||||
return -1;
|
||||
@ -332,7 +321,7 @@ long pSeries_lpar_hpte_insert(unsigned long hpte_group,
|
||||
/* Because of iSeries, we have to pass down the secondary
|
||||
* bucket bit here as well
|
||||
*/
|
||||
return (slot & 7) | (secondary << 3);
|
||||
return (slot & 7) | (!!(vflags & HPTE_V_SECONDARY) << 3);
|
||||
}
|
||||
|
||||
static DEFINE_SPINLOCK(pSeries_lpar_tlbie_lock);
|
||||
@ -427,22 +416,18 @@ static long pSeries_lpar_hpte_find(unsigned long vpn)
|
||||
unsigned long hash;
|
||||
unsigned long i, j;
|
||||
long slot;
|
||||
union {
|
||||
unsigned long dword0;
|
||||
Hpte_dword0 dw0;
|
||||
} hpte_dw0;
|
||||
Hpte_dword0 dw0;
|
||||
unsigned long hpte_v;
|
||||
|
||||
hash = hpt_hash(vpn, 0);
|
||||
|
||||
for (j = 0; j < 2; j++) {
|
||||
slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
|
||||
for (i = 0; i < HPTES_PER_GROUP; i++) {
|
||||
hpte_dw0.dword0 = pSeries_lpar_hpte_getword0(slot);
|
||||
dw0 = hpte_dw0.dw0;
|
||||
hpte_v = pSeries_lpar_hpte_getword0(slot);
|
||||
|
||||
if ((dw0.avpn == (vpn >> 11)) && dw0.v &&
|
||||
(dw0.h == j)) {
|
||||
if ((HPTE_V_AVPN_VAL(hpte_v) == (vpn >> 11))
|
||||
&& (hpte_v & HPTE_V_VALID)
|
||||
&& (!!(hpte_v & HPTE_V_SECONDARY) == j)) {
|
||||
/* HPTE matches */
|
||||
if (j)
|
||||
slot = -slot;
|

@ -170,9 +170,7 @@ htab_insert_pte:
/* Call ppc_md.hpte_insert */
ld r7,STK_PARM(r4)(r1) /* Retreive new pp bits */
mr r4,r29 /* Retreive va */
li r6,0 /* primary slot */
li r8,0 /* not bolted and not large */
li r9,0
li r6,0 /* no vflags */
_GLOBAL(htab_call_hpte_insert1)
bl . /* Will be patched by htab_finish_init() */
cmpdi 0,r3,0
@ -192,9 +190,7 @@ _GLOBAL(htab_call_hpte_insert1)
/* Call ppc_md.hpte_insert */
ld r7,STK_PARM(r4)(r1) /* Retreive new pp bits */
mr r4,r29 /* Retreive va */
li r6,1 /* secondary slot */
li r8,0 /* not bolted and not large */
li r9,0
li r6,HPTE_V_SECONDARY@l /* secondary slot */
_GLOBAL(htab_call_hpte_insert2)
bl . /* Will be patched by htab_finish_init() */
cmpdi 0,r3,0

@ -27,9 +27,9 @@

static DEFINE_SPINLOCK(native_tlbie_lock);

static inline void native_lock_hpte(HPTE *hptep)
static inline void native_lock_hpte(hpte_t *hptep)
{
unsigned long *word = &hptep->dw0.dword0;
unsigned long *word = &hptep->v;

while (1) {
if (!test_and_set_bit(HPTE_LOCK_BIT, word))
@ -39,32 +39,28 @@ static inline void native_lock_hpte(HPTE *hptep)
}
}

static inline void native_unlock_hpte(HPTE *hptep)
static inline void native_unlock_hpte(hpte_t *hptep)
{
unsigned long *word = &hptep->dw0.dword0;
unsigned long *word = &hptep->v;

asm volatile("lwsync":::"memory");
clear_bit(HPTE_LOCK_BIT, word);
}

long native_hpte_insert(unsigned long hpte_group, unsigned long va,
unsigned long prpn, int secondary,
unsigned long hpteflags, int bolted, int large)
unsigned long prpn, unsigned long vflags,
unsigned long rflags)
{
unsigned long arpn = physRpn_to_absRpn(prpn);
HPTE *hptep = htab_address + hpte_group;
Hpte_dword0 dw0;
HPTE lhpte;
hpte_t *hptep = htab_address + hpte_group;
unsigned long hpte_v, hpte_r;
int i;

for (i = 0; i < HPTES_PER_GROUP; i++) {
dw0 = hptep->dw0.dw0;

if (!dw0.v) {
if (! (hptep->v & HPTE_V_VALID)) {
/* retry with lock held */
native_lock_hpte(hptep);
dw0 = hptep->dw0.dw0;
if (!dw0.v)
if (! (hptep->v & HPTE_V_VALID))
break;
native_unlock_hpte(hptep);
}
@ -75,56 +71,45 @@ long native_hpte_insert(unsigned long hpte_group, unsigned long va,
if (i == HPTES_PER_GROUP)
return -1;

lhpte.dw1.dword1 = 0;
lhpte.dw1.dw1.rpn = arpn;
lhpte.dw1.flags.flags = hpteflags;

lhpte.dw0.dword0 = 0;
lhpte.dw0.dw0.avpn = va >> 23;
lhpte.dw0.dw0.h = secondary;
lhpte.dw0.dw0.bolted = bolted;
lhpte.dw0.dw0.v = 1;

if (large) {
lhpte.dw0.dw0.l = 1;
lhpte.dw0.dw0.avpn &= ~0x1UL;
}

hptep->dw1.dword1 = lhpte.dw1.dword1;
hpte_v = (va >> 23) << HPTE_V_AVPN_SHIFT | vflags | HPTE_V_VALID;
if (vflags & HPTE_V_LARGE)
va &= ~(1UL << HPTE_V_AVPN_SHIFT);
hpte_r = (arpn << HPTE_R_RPN_SHIFT) | rflags;

hptep->r = hpte_r;
/* Guarantee the second dword is visible before the valid bit */
__asm__ __volatile__ ("eieio" : : : "memory");

/*
* Now set the first dword including the valid bit
* NOTE: this also unlocks the hpte
*/
hptep->dw0.dword0 = lhpte.dw0.dword0;
hptep->v = hpte_v;

__asm__ __volatile__ ("ptesync" : : : "memory");

return i | (secondary << 3);
return i | (!!(vflags & HPTE_V_SECONDARY) << 3);
}

static long native_hpte_remove(unsigned long hpte_group)
{
HPTE *hptep;
Hpte_dword0 dw0;
hpte_t *hptep;
int i;
int slot_offset;
unsigned long hpte_v;

/* pick a random entry to start at */
slot_offset = mftb() & 0x7;

for (i = 0; i < HPTES_PER_GROUP; i++) {
hptep = htab_address + hpte_group + slot_offset;
dw0 = hptep->dw0.dw0;
hpte_v = hptep->v;

if (dw0.v && !dw0.bolted) {
if ((hpte_v & HPTE_V_VALID) && !(hpte_v & HPTE_V_BOLTED)) {
/* retry with lock held */
native_lock_hpte(hptep);
dw0 = hptep->dw0.dw0;
if (dw0.v && !dw0.bolted)
hpte_v = hptep->v;
if ((hpte_v & HPTE_V_VALID)
&& !(hpte_v & HPTE_V_BOLTED))
break;
native_unlock_hpte(hptep);
}
@ -137,15 +122,15 @@ static long native_hpte_remove(unsigned long hpte_group)
return -1;

/* Invalidate the hpte. NOTE: this also unlocks it */
hptep->dw0.dword0 = 0;
hptep->v = 0;

return i;
}

static inline void set_pp_bit(unsigned long pp, HPTE *addr)
static inline void set_pp_bit(unsigned long pp, hpte_t *addr)
{
unsigned long old;
unsigned long *p = &addr->dw1.dword1;
unsigned long *p = &addr->r;

__asm__ __volatile__(
"1: ldarx %0,0,%3\n\
@ -163,11 +148,11 @@ static inline void set_pp_bit(unsigned long pp, HPTE *addr)
*/
static long native_hpte_find(unsigned long vpn)
{
HPTE *hptep;
hpte_t *hptep;
unsigned long hash;
unsigned long i, j;
long slot;
Hpte_dword0 dw0;
unsigned long hpte_v;

hash = hpt_hash(vpn, 0);

@ -175,10 +160,11 @@ static long native_hpte_find(unsigned long vpn)
slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
for (i = 0; i < HPTES_PER_GROUP; i++) {
hptep = htab_address + slot;
dw0 = hptep->dw0.dw0;
hpte_v = hptep->v;

if ((dw0.avpn == (vpn >> 11)) && dw0.v &&
(dw0.h == j)) {
if ((HPTE_V_AVPN_VAL(hpte_v) == (vpn >> 11))
&& (hpte_v & HPTE_V_VALID)
&& ( !!(hpte_v & HPTE_V_SECONDARY) == j)) {
/* HPTE matches */
if (j)
slot = -slot;
@ -195,20 +181,21 @@ static long native_hpte_find(unsigned long vpn)
static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
unsigned long va, int large, int local)
{
HPTE *hptep = htab_address + slot;
Hpte_dword0 dw0;
hpte_t *hptep = htab_address + slot;
unsigned long hpte_v;
unsigned long avpn = va >> 23;
int ret = 0;

if (large)
avpn &= ~0x1UL;
avpn &= ~1;

native_lock_hpte(hptep);

dw0 = hptep->dw0.dw0;
hpte_v = hptep->v;

/* Even if we miss, we need to invalidate the TLB */
if ((dw0.avpn != avpn) || !dw0.v) {
if ((HPTE_V_AVPN_VAL(hpte_v) != avpn)
|| !(hpte_v & HPTE_V_VALID)) {
native_unlock_hpte(hptep);
ret = -1;
} else {
@ -244,7 +231,7 @@ static void native_hpte_updateboltedpp(unsigned long newpp, unsigned long ea)
{
unsigned long vsid, va, vpn, flags = 0;
long slot;
HPTE *hptep;
hpte_t *hptep;
int lock_tlbie = !cpu_has_feature(CPU_FTR_LOCKLESS_TLBIE);

vsid = get_kernel_vsid(ea);
@ -269,26 +256,27 @@ static void native_hpte_updateboltedpp(unsigned long newpp, unsigned long ea)
static void native_hpte_invalidate(unsigned long slot, unsigned long va,
int large, int local)
{
HPTE *hptep = htab_address + slot;
Hpte_dword0 dw0;
hpte_t *hptep = htab_address + slot;
unsigned long hpte_v;
unsigned long avpn = va >> 23;
unsigned long flags;
int lock_tlbie = !cpu_has_feature(CPU_FTR_LOCKLESS_TLBIE);

if (large)
avpn &= ~0x1UL;
avpn &= ~1;

local_irq_save(flags);
native_lock_hpte(hptep);

dw0 = hptep->dw0.dw0;
hpte_v = hptep->v;

/* Even if we miss, we need to invalidate the TLB */
if ((dw0.avpn != avpn) || !dw0.v) {
if ((HPTE_V_AVPN_VAL(hpte_v) != avpn)
|| !(hpte_v & HPTE_V_VALID)) {
native_unlock_hpte(hptep);
} else {
/* Invalidate the hpte. NOTE: this also unlocks it */
hptep->dw0.dword0 = 0;
hptep->v = 0;
}

/* Invalidate the tlb */
@ -315,8 +303,8 @@ static void native_hpte_invalidate(unsigned long slot, unsigned long va,
static void native_hpte_clear(void)
{
unsigned long slot, slots, flags;
HPTE *hptep = htab_address;
Hpte_dword0 dw0;
hpte_t *hptep = htab_address;
unsigned long hpte_v;
unsigned long pteg_count;

pteg_count = htab_hash_mask + 1;
@ -336,11 +324,11 @@ static void native_hpte_clear(void)
* running, right? and for crash dump, we probably
* don't want to wait for a maybe bad cpu.
*/
dw0 = hptep->dw0.dw0;
hpte_v = hptep->v;

if (dw0.v) {
hptep->dw0.dword0 = 0;
tlbie(slot2va(dw0.avpn, dw0.l, dw0.h, slot), dw0.l);
if (hpte_v & HPTE_V_VALID) {
hptep->v = 0;
tlbie(slot2va(hpte_v, slot), hpte_v & HPTE_V_LARGE);
}
}

@ -353,8 +341,8 @@ static void native_flush_hash_range(unsigned long context,
{
unsigned long vsid, vpn, va, hash, secondary, slot, flags, avpn;
int i, j;
HPTE *hptep;
Hpte_dword0 dw0;
hpte_t *hptep;
unsigned long hpte_v;
struct ppc64_tlb_batch *batch = &__get_cpu_var(ppc64_tlb_batch);

/* XXX fix for large ptes */
@ -390,14 +378,15 @@ static void native_flush_hash_range(unsigned long context,

native_lock_hpte(hptep);

dw0 = hptep->dw0.dw0;
hpte_v = hptep->v;

/* Even if we miss, we need to invalidate the TLB */
if ((dw0.avpn != avpn) || !dw0.v) {
if ((HPTE_V_AVPN_VAL(hpte_v) != avpn)
|| !(hpte_v & HPTE_V_VALID)) {
native_unlock_hpte(hptep);
} else {
/* Invalidate the hpte. NOTE: this also unlocks it */
hptep->dw0.dword0 = 0;
hptep->v = 0;
}

j++;

@ -75,8 +75,8 @@
extern unsigned long dart_tablebase;
#endif /* CONFIG_U3_DART */

HPTE *htab_address;
unsigned long htab_hash_mask;
hpte_t *htab_address;
unsigned long htab_hash_mask;

extern unsigned long _SDR1;

@ -97,11 +97,15 @@ static inline void create_pte_mapping(unsigned long start, unsigned long end,
unsigned long addr;
unsigned int step;
unsigned long tmp_mode;
unsigned long vflags;

if (large)
if (large) {
step = 16*MB;
else
vflags = HPTE_V_BOLTED | HPTE_V_LARGE;
} else {
step = 4*KB;
vflags = HPTE_V_BOLTED;
}

for (addr = start; addr < end; addr += step) {
unsigned long vpn, hash, hpteg;
@ -129,12 +133,12 @@ static inline void create_pte_mapping(unsigned long start, unsigned long end,
if (systemcfg->platform & PLATFORM_LPAR)
ret = pSeries_lpar_hpte_insert(hpteg, va,
virt_to_abs(addr) >> PAGE_SHIFT,
0, tmp_mode, 1, large);
vflags, tmp_mode);
else
#endif /* CONFIG_PPC_PSERIES */
ret = native_hpte_insert(hpteg, va,
virt_to_abs(addr) >> PAGE_SHIFT,
0, tmp_mode, 1, large);
vflags, tmp_mode);

if (ret == -1) {
ppc64_terminate_msg(0x20, "create_pte_mapping");

@ -583,7 +583,7 @@ int hash_huge_page(struct mm_struct *mm, unsigned long access,
pte_t *ptep;
unsigned long va, vpn;
pte_t old_pte, new_pte;
unsigned long hpteflags, prpn;
unsigned long rflags, prpn;
long slot;
int err = 1;

@ -626,9 +626,9 @@ int hash_huge_page(struct mm_struct *mm, unsigned long access,
old_pte = *ptep;
new_pte = old_pte;

hpteflags = 0x2 | (! (pte_val(new_pte) & _PAGE_RW));
rflags = 0x2 | (! (pte_val(new_pte) & _PAGE_RW));
/* _PAGE_EXEC -> HW_NO_EXEC since it's inverted */
hpteflags |= ((pte_val(new_pte) & _PAGE_EXEC) ? 0 : HW_NO_EXEC);
rflags |= ((pte_val(new_pte) & _PAGE_EXEC) ? 0 : HW_NO_EXEC);

/* Check if pte already has an hpte (case 2) */
if (unlikely(pte_val(old_pte) & _PAGE_HASHPTE)) {
@ -641,7 +641,7 @@ int hash_huge_page(struct mm_struct *mm, unsigned long access,
slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
slot += (pte_val(old_pte) & _PAGE_GROUP_IX) >> 12;

if (ppc_md.hpte_updatepp(slot, hpteflags, va, 1, local) == -1)
if (ppc_md.hpte_updatepp(slot, rflags, va, 1, local) == -1)
pte_val(old_pte) &= ~_PAGE_HPTEFLAGS;
}

@ -661,10 +661,10 @@ repeat:

/* Add in WIMG bits */
/* XXX We should store these in the pte */
hpteflags |= _PAGE_COHERENT;
rflags |= _PAGE_COHERENT;

slot = ppc_md.hpte_insert(hpte_group, va, prpn, 0,
hpteflags, 0, 1);
slot = ppc_md.hpte_insert(hpte_group, va, prpn,
HPTE_V_LARGE, rflags);

/* Primary is full, try the secondary */
if (unlikely(slot == -1)) {
@ -672,7 +672,7 @@ repeat:
hpte_group = ((~hash & htab_hash_mask) *
HPTES_PER_GROUP) & ~0x7UL;
slot = ppc_md.hpte_insert(hpte_group, va, prpn,
1, hpteflags, 0, 1);
HPTE_V_LARGE, rflags);
if (slot == -1) {
if (mftb() & 0x1)
hpte_group = ((hash & htab_hash_mask) * HPTES_PER_GROUP) & ~0x7UL;

@ -180,9 +180,10 @@ static int map_io_page(unsigned long ea, unsigned long pa, int flags)
hpteg = ((hash & htab_hash_mask) * HPTES_PER_GROUP);

/* Panic if a pte grpup is full */
if (ppc_md.hpte_insert(hpteg, va, pa >> PAGE_SHIFT, 0,
_PAGE_NO_CACHE|_PAGE_GUARDED|PP_RWXX,
1, 0) == -1) {
if (ppc_md.hpte_insert(hpteg, va, pa >> PAGE_SHIFT,
HPTE_V_BOLTED,
_PAGE_NO_CACHE|_PAGE_GUARDED|PP_RWXX)
== -1) {
panic("map_io_page: could not insert mapping");
}
}
||||
|
@ -58,6 +58,7 @@
|
||||
#include <linux/compat.h>
|
||||
#include <linux/vfs.h>
|
||||
#include <linux/ptrace.h>
|
||||
#include <linux/fadvise.h>
|
||||
|
||||
#include <asm/types.h>
|
||||
#include <asm/ipc.h>
|
||||
@ -1043,3 +1044,40 @@ sys32_timer_create(clockid_t which_clock, struct compat_sigevent *se32,
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* 31 bit emulation wrapper functions for sys_fadvise64/fadvise64_64.
|
||||
* These need to rewrite the advise values for POSIX_FADV_{DONTNEED,NOREUSE}
|
||||
* because the 31 bit values differ from the 64 bit values.
|
||||
*/
|
||||
|
||||
asmlinkage long
|
||||
sys32_fadvise64(int fd, loff_t offset, size_t len, int advise)
|
||||
{
|
||||
if (advise == 4)
|
||||
advise = POSIX_FADV_DONTNEED;
|
||||
else if (advise == 5)
|
||||
advise = POSIX_FADV_NOREUSE;
|
||||
return sys_fadvise64(fd, offset, len, advise);
|
||||
}
|
||||
|
||||
struct fadvise64_64_args {
|
||||
int fd;
|
||||
long long offset;
|
||||
long long len;
|
||||
int advice;
|
||||
};
|
||||
|
||||
asmlinkage long
|
||||
sys32_fadvise64_64(struct fadvise64_64_args __user *args)
|
||||
{
|
||||
struct fadvise64_64_args a;
|
||||
|
||||
if ( copy_from_user(&a, args, sizeof(a)) )
|
||||
return -EFAULT;
|
||||
if (a.advice == 4)
|
||||
a.advice = POSIX_FADV_DONTNEED;
|
||||
else if (a.advice == 5)
|
||||
a.advice = POSIX_FADV_NOREUSE;
|
||||
return sys_fadvise64_64(a.fd, a.offset, a.len, a.advice);
|
||||
}
|
||||
|
@ -1251,12 +1251,12 @@ sys32_fadvise64_wrapper:
|
||||
or %r3,%r4 # get low word of 64bit loff_t
|
||||
llgfr %r4,%r5 # size_t (unsigned long)
|
||||
lgfr %r5,%r6 # int
|
||||
jg sys_fadvise64
|
||||
jg sys32_fadvise64
|
||||
|
||||
.globl sys32_fadvise64_64_wrapper
|
||||
sys32_fadvise64_64_wrapper:
|
||||
llgtr %r2,%r2 # struct fadvise64_64_args *
|
||||
jg s390_fadvise64_64
|
||||
jg sys32_fadvise64_64
|
||||
|
||||
.globl sys32_clock_settime_wrapper
|
||||
sys32_clock_settime_wrapper:
|
||||
|
@ -135,7 +135,7 @@ config UML_NET_MCAST
|
||||
|
||||
config UML_NET_PCAP
|
||||
bool "pcap transport"
|
||||
depends on UML_NET && BROKEN
|
||||
depends on UML_NET && EXPERIMENTAL
|
||||
help
|
||||
The pcap transport makes a pcap packet stream on the host look
|
||||
like an ethernet device inside UML. This is useful for making
|
||||
|
@ -51,25 +51,26 @@ MRPROPER_DIRS += $(ARCH_DIR)/include2
|
||||
endif
|
||||
SYS_DIR := $(ARCH_DIR)/include/sysdep-$(SUBARCH)
|
||||
|
||||
include $(srctree)/$(ARCH_DIR)/Makefile-$(SUBARCH)
|
||||
# -Dvmap=kernel_vmap affects everything, and prevents anything from
|
||||
# referencing the libpcap.o symbol so named.
|
||||
|
||||
core-y += $(SUBARCH_CORE)
|
||||
libs-y += $(SUBARCH_LIBS)
|
||||
CFLAGS += $(CFLAGS-y) -D__arch_um__ -DSUBARCH=\"$(SUBARCH)\" \
|
||||
$(ARCH_INCLUDE) $(MODE_INCLUDE) -Dvmap=kernel_vmap
|
||||
|
||||
USER_CFLAGS := $(patsubst -I%,,$(CFLAGS))
|
||||
USER_CFLAGS := $(patsubst -D__KERNEL__,,$(USER_CFLAGS)) $(ARCH_INCLUDE) \
|
||||
$(MODE_INCLUDE)
|
||||
|
||||
# -Derrno=kernel_errno - This turns all kernel references to errno into
|
||||
# kernel_errno to separate them from the libc errno. This allows -fno-common
|
||||
# in CFLAGS. Otherwise, it would cause ld to complain about the two different
|
||||
# errnos.
|
||||
|
||||
CFLAGS += $(CFLAGS-y) -D__arch_um__ -DSUBARCH=\"$(SUBARCH)\" \
|
||||
$(ARCH_INCLUDE) $(MODE_INCLUDE)
|
||||
|
||||
USER_CFLAGS := $(patsubst -I%,,$(CFLAGS))
|
||||
USER_CFLAGS := $(patsubst -D__KERNEL__,,$(USER_CFLAGS)) $(ARCH_INCLUDE) \
|
||||
$(MODE_INCLUDE) $(ARCH_USER_CFLAGS)
|
||||
CFLAGS += -Derrno=kernel_errno -Dsigprocmask=kernel_sigprocmask
|
||||
CFLAGS += $(call cc-option,-fno-unit-at-a-time,)
|
||||
|
||||
include $(srctree)/$(ARCH_DIR)/Makefile-$(SUBARCH)
|
||||
|
||||
#This will adjust *FLAGS accordingly to the platform.
|
||||
include $(srctree)/$(ARCH_DIR)/Makefile-os-$(OS)
|
||||
|
||||
@ -116,18 +117,19 @@ CONFIG_KERNEL_STACK_ORDER ?= 2
|
||||
STACK_SIZE := $(shell echo $$[ 4096 * (1 << $(CONFIG_KERNEL_STACK_ORDER)) ] )
|
||||
|
||||
ifndef START
|
||||
START = $$(($(TOP_ADDR) - $(SIZE)))
|
||||
START = $(shell echo $$[ $(TOP_ADDR) - $(SIZE) ] )
|
||||
endif
|
||||
|
||||
CPPFLAGS_vmlinux.lds = $(shell echo -U$(SUBARCH) \
|
||||
CPPFLAGS_vmlinux.lds = -U$(SUBARCH) \
|
||||
-DSTART=$(START) -DELF_ARCH=$(ELF_ARCH) \
|
||||
-DELF_FORMAT=\"$(ELF_FORMAT)\" $(CPP_MODE-y) \
|
||||
-DKERNEL_STACK_SIZE=$(STACK_SIZE) -DSUBARCH=$(SUBARCH))
|
||||
-DELF_FORMAT="$(ELF_FORMAT)" $(CPP_MODE-y) \
|
||||
-DKERNEL_STACK_SIZE=$(STACK_SIZE) \
|
||||
-DUNMAP_PATH=arch/um/sys-$(SUBARCH)/unmap_fin.o
|
||||
|
||||
#The wrappers will select whether using "malloc" or the kernel allocator.
|
||||
LINK_WRAPS = -Wl,--wrap,malloc -Wl,--wrap,free -Wl,--wrap,calloc
|
||||
|
||||
CFLAGS_vmlinux = $(LINK-y) $(LINK_WRAPS)
|
||||
CFLAGS_vmlinux := $(LINK-y) $(LINK_WRAPS)
|
||||
define cmd_vmlinux__
|
||||
$(CC) $(CFLAGS_vmlinux) -o $@ \
|
||||
-Wl,-T,$(vmlinux-lds) $(vmlinux-init) \
|
||||
|
@ -1,4 +1,4 @@
|
||||
SUBARCH_CORE := arch/um/sys-i386/ arch/i386/crypto/
|
||||
core-y += arch/um/sys-i386/ arch/i386/crypto/
|
||||
|
||||
TOP_ADDR := $(CONFIG_TOP_ADDR)
|
||||
|
||||
@ -8,21 +8,32 @@ ifeq ($(CONFIG_MODE_SKAS),y)
|
||||
endif
|
||||
endif
|
||||
|
||||
LDFLAGS += -m elf_i386
|
||||
ELF_ARCH := $(SUBARCH)
|
||||
ELF_FORMAT := elf32-$(SUBARCH)
|
||||
OBJCOPYFLAGS := -O binary -R .note -R .comment -S
|
||||
|
||||
ifeq ("$(origin SUBARCH)", "command line")
|
||||
ifneq ("$(shell uname -m | sed -e s/i.86/i386/)", "$(SUBARCH)")
|
||||
CFLAGS += $(call cc-option,-m32)
|
||||
USER_CFLAGS += $(call cc-option,-m32)
|
||||
HOSTCFLAGS += $(call cc-option,-m32)
|
||||
HOSTLDFLAGS += $(call cc-option,-m32)
|
||||
AFLAGS += $(call cc-option,-m32)
|
||||
LINK-y += $(call cc-option,-m32)
|
||||
UML_OBJCOPYFLAGS += -F $(ELF_FORMAT)
|
||||
|
||||
export LDFLAGS HOSTCFLAGS HOSTLDFLAGS UML_OBJCOPYFLAGS
|
||||
endif
|
||||
endif
|
||||
|
||||
CFLAGS += -U__$(SUBARCH)__ -U$(SUBARCH) $(STUB_CFLAGS)
|
||||
ARCH_USER_CFLAGS :=
|
||||
|
||||
ifneq ($(CONFIG_GPROF),y)
|
||||
ARCH_CFLAGS += -DUM_FASTCALL
|
||||
endif
|
||||
|
||||
ELF_ARCH := $(SUBARCH)
|
||||
ELF_FORMAT := elf32-$(SUBARCH)
|
||||
|
||||
OBJCOPYFLAGS := -O binary -R .note -R .comment -S
|
||||
|
||||
SYS_UTIL_DIR := $(ARCH_DIR)/sys-i386/util
|
||||
|
||||
SYS_HEADERS := $(SYS_DIR)/sc.h $(SYS_DIR)/thread.h
|
||||
SYS_HEADERS := $(SYS_DIR)/sc.h $(SYS_DIR)/thread.h
|
||||
|
||||
prepare: $(SYS_HEADERS)
|
||||
|
||||
|
@ -1,11 +1,13 @@
|
||||
# Copyright 2003 - 2004 Pathscale, Inc
|
||||
# Released under the GPL
|
||||
|
||||
SUBARCH_LIBS := arch/um/sys-x86_64/
|
||||
libs-y += arch/um/sys-x86_64/
|
||||
START := 0x60000000
|
||||
|
||||
#We #undef __x86_64__ for kernelspace, not for userspace where
|
||||
#it's needed for headers to work!
|
||||
CFLAGS += -U__$(SUBARCH)__ -fno-builtin $(STUB_CFLAGS)
|
||||
ARCH_USER_CFLAGS := -D__x86_64__
|
||||
USER_CFLAGS += -fno-builtin
|
||||
|
||||
ELF_ARCH := i386:x86-64
|
||||
ELF_FORMAT := elf64-x86-64
|
||||
|
@ -10,7 +10,6 @@ slip-objs := slip_kern.o slip_user.o
|
||||
slirp-objs := slirp_kern.o slirp_user.o
|
||||
daemon-objs := daemon_kern.o daemon_user.o
|
||||
mcast-objs := mcast_kern.o mcast_user.o
|
||||
#pcap-objs := pcap_kern.o pcap_user.o $(PCAP)
|
||||
net-objs := net_kern.o net_user.o
|
||||
mconsole-objs := mconsole_kern.o mconsole_user.o
|
||||
hostaudio-objs := hostaudio_kern.o
|
||||
@ -18,6 +17,17 @@ ubd-objs := ubd_kern.o ubd_user.o
|
||||
port-objs := port_kern.o port_user.o
|
||||
harddog-objs := harddog_kern.o harddog_user.o
|
||||
|
||||
LDFLAGS_pcap.o := -r $(shell $(CC) $(CFLAGS) -print-file-name=libpcap.a)
|
||||
|
||||
$(obj)/pcap.o: $(obj)/pcap_kern.o $(obj)/pcap_user.o
|
||||
$(LD) -r -dp -o $@ $^ $(LDFLAGS) $(LDFLAGS_pcap.o)
|
||||
#XXX: The call below does not work because the flags are added before the
|
||||
# object name, so nothing from the library gets linked.
|
||||
#$(call if_changed,ld)
|
||||
|
||||
# When the above is fixed, don't forget to add this too!
|
||||
#targets := $(obj)/pcap.o
|
||||
|
||||
obj-y := stdio_console.o fd.o chan_kern.o chan_user.o line.o
|
||||
obj-$(CONFIG_SSL) += ssl.o
|
||||
obj-$(CONFIG_STDERR_CONSOLE) += stderr_console.o
|
||||
@ -26,7 +36,7 @@ obj-$(CONFIG_UML_NET_SLIP) += slip.o slip_common.o
|
||||
obj-$(CONFIG_UML_NET_SLIRP) += slirp.o slip_common.o
|
||||
obj-$(CONFIG_UML_NET_DAEMON) += daemon.o
|
||||
obj-$(CONFIG_UML_NET_MCAST) += mcast.o
|
||||
#obj-$(CONFIG_UML_NET_PCAP) += pcap.o $(PCAP)
|
||||
obj-$(CONFIG_UML_NET_PCAP) += pcap.o
|
||||
obj-$(CONFIG_UML_NET) += net.o
|
||||
obj-$(CONFIG_MCONSOLE) += mconsole.o
|
||||
obj-$(CONFIG_MMAPPER) += mmapper_kern.o
|
||||
@ -41,6 +51,7 @@ obj-$(CONFIG_UML_WATCHDOG) += harddog.o
|
||||
obj-$(CONFIG_BLK_DEV_COW_COMMON) += cow_user.o
|
||||
obj-$(CONFIG_UML_RANDOM) += random.o
|
||||
|
||||
USER_OBJS := fd.o null.o pty.o tty.o xterm.o slip_common.o
|
||||
# pcap_user.o must be added explicitly.
|
||||
USER_OBJS := fd.o null.o pty.o tty.o xterm.o slip_common.o pcap_user.o
|
||||
|
||||
include arch/um/scripts/Makefile.rules
|
||||
|
@ -16,8 +16,8 @@ SECTIONS
|
||||
__binary_start = .;
|
||||
|
||||
#ifdef MODE_TT
|
||||
.remap_data : { arch/um/sys-SUBARCH/unmap_fin.o (.data .bss) }
|
||||
.remap : { arch/um/sys-SUBARCH/unmap_fin.o (.text) }
|
||||
.remap_data : { UNMAP_PATH (.data .bss) }
|
||||
.remap : { UNMAP_PATH (.text) }
|
||||
|
||||
. = ALIGN(4096); /* Init code and data */
|
||||
#endif
|
||||
|
@ -12,8 +12,8 @@ $(obj)/unmap.o: _c_flags = $(call unprofile,$(CFLAGS))
|
||||
|
||||
quiet_cmd_wrapld = LD $@
|
||||
define cmd_wrapld
|
||||
$(LD) -r -o $(obj)/unmap_tmp.o $< $(shell $(CC) -print-file-name=libc.a); \
|
||||
$(OBJCOPY) $(obj)/unmap_tmp.o $@ -G switcheroo
|
||||
$(LD) $(LDFLAGS) -r -o $(obj)/unmap_tmp.o $< $(shell $(CC) $(CFLAGS) -print-file-name=libc.a); \
|
||||
$(OBJCOPY) $(UML_OBJCOPYFLAGS) $(obj)/unmap_tmp.o $@ -G switcheroo
|
||||
endef
|
||||
|
||||
$(obj)/unmap_fin.o : $(obj)/unmap.o FORCE
|
||||
|
@ -4,96 +4,106 @@
|
||||
*/
|
||||
|
||||
#include "linux/config.h"
|
||||
#include "linux/sched.h"
|
||||
#include "linux/slab.h"
|
||||
#include "linux/types.h"
|
||||
#include "asm/uaccess.h"
|
||||
#include "asm/ptrace.h"
|
||||
#include "asm/smp.h"
|
||||
#include "asm/ldt.h"
|
||||
#include "choose-mode.h"
|
||||
#include "kern.h"
|
||||
#include "mode_kern.h"
|
||||
|
||||
#ifdef CONFIG_MODE_TT
|
||||
|
||||
extern int modify_ldt(int func, void *ptr, unsigned long bytecount);
|
||||
|
||||
/* XXX this needs copy_to_user and copy_from_user */
|
||||
|
||||
int sys_modify_ldt_tt(int func, void __user *ptr, unsigned long bytecount)
|
||||
static int do_modify_ldt_tt(int func, void *ptr, unsigned long bytecount)
|
||||
{
|
||||
if (!access_ok(VERIFY_READ, ptr, bytecount))
|
||||
return -EFAULT;
|
||||
|
||||
return modify_ldt(func, ptr, bytecount);
|
||||
}
|
||||
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_MODE_SKAS
|
||||
extern int userspace_pid[];
|
||||
|
||||
#include "skas.h"
|
||||
#include "skas_ptrace.h"
|
||||
|
||||
int sys_modify_ldt_skas(int func, void __user *ptr, unsigned long bytecount)
|
||||
static int do_modify_ldt_skas(int func, void *ptr, unsigned long bytecount)
|
||||
{
|
||||
struct ptrace_ldt ldt;
|
||||
void *buf;
|
||||
int res, n;
|
||||
u32 cpu;
|
||||
int res;
|
||||
|
||||
buf = kmalloc(bytecount, GFP_KERNEL);
|
||||
if(buf == NULL)
|
||||
return(-ENOMEM);
|
||||
ldt = ((struct ptrace_ldt) { .func = func,
|
||||
.ptr = ptr,
|
||||
.bytecount = bytecount });
|
||||
|
||||
res = 0;
|
||||
cpu = get_cpu();
|
||||
res = ptrace(PTRACE_LDT, userspace_pid[cpu], 0, (unsigned long) &ldt);
|
||||
put_cpu();
|
||||
|
||||
return res;
|
||||
}
|
||||
#endif
|
||||
|
||||
int sys_modify_ldt(int func, void __user *ptr, unsigned long bytecount)
|
||||
{
|
||||
struct user_desc info;
|
||||
int res = 0;
|
||||
void *buf = NULL;
|
||||
void *p = NULL; /* What we pass to host. */
|
||||
|
||||
switch(func){
|
||||
case 1:
|
||||
case 0x11:
|
||||
res = copy_from_user(buf, ptr, bytecount);
|
||||
break;
|
||||
}
|
||||
case 0x11: /* write_ldt */
|
||||
/* Do this check now to avoid overflows. */
|
||||
if (bytecount != sizeof(struct user_desc)) {
|
||||
res = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if(res != 0){
|
||||
res = -EFAULT;
|
||||
if(copy_from_user(&info, ptr, sizeof(info))) {
|
||||
res = -EFAULT;
|
||||
goto out;
|
||||
}
|
||||
|
||||
p = &info;
|
||||
break;
|
||||
case 0:
|
||||
case 2: /* read_ldt */
|
||||
|
||||
/* The use of info avoids kmalloc on the write case, not on the
|
||||
* read one. */
|
||||
buf = kmalloc(bytecount, GFP_KERNEL);
|
||||
if (!buf) {
|
||||
res = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
p = buf;
|
||||
default:
|
||||
res = -ENOSYS;
|
||||
goto out;
|
||||
}
|
||||
|
||||
ldt = ((struct ptrace_ldt) { .func = func,
|
||||
.ptr = buf,
|
||||
.bytecount = bytecount });
|
||||
#warning Need to look up userspace_pid by cpu
|
||||
res = ptrace(PTRACE_LDT, userspace_pid[0], 0, (unsigned long) &ldt);
|
||||
res = CHOOSE_MODE_PROC(do_modify_ldt_tt, do_modify_ldt_skas, func,
|
||||
p, bytecount);
|
||||
if(res < 0)
|
||||
goto out;
|
||||
|
||||
switch(func){
|
||||
case 0:
|
||||
case 2:
|
||||
n = res;
|
||||
res = copy_to_user(ptr, buf, n);
|
||||
if(res != 0)
|
||||
/* Modify_ldt was for reading and returned the number of read
|
||||
* bytes.*/
|
||||
if(copy_to_user(ptr, p, res))
|
||||
res = -EFAULT;
|
||||
else
|
||||
res = n;
|
||||
break;
|
||||
}
|
||||
|
||||
out:
|
||||
out:
|
||||
kfree(buf);
|
||||
return(res);
|
||||
return res;
|
||||
}
|
||||
#endif
|
||||
|
||||
int sys_modify_ldt(int func, void __user *ptr, unsigned long bytecount)
|
||||
{
|
||||
return(CHOOSE_MODE_PROC(sys_modify_ldt_tt, sys_modify_ldt_skas, func,
|
||||
ptr, bytecount));
|
||||
}
|
||||
|
||||
|
||||
|
||||
/*
|
||||
* Overrides for Emacs so that we follow Linus's tabbing style.
|
||||
* Emacs will notice this stuff at the end of the file and automatically
|
||||
* adjust the settings for this buffer only. This must remain at the end
|
||||
* of the file.
|
||||
* ---------------------------------------------------------------------------
|
||||
* Local variables:
|
||||
* c-file-style: "linux"
|
||||
* End:
|
||||
*/
|
||||
|
@ -15,7 +15,7 @@ int switcheroo(int fd, int prot, void *from, void *to, int size)
|
||||
if(munmap(to, size) < 0){
|
||||
return(-1);
|
||||
}
|
||||
if(mmap2(to, size, prot, MAP_SHARED | MAP_FIXED, fd, 0) != to){
|
||||
if(mmap2(to, size, prot, MAP_SHARED | MAP_FIXED, fd, 0) == (void*) -1 ){
|
||||
return(-1);
|
||||
}
|
||||
if(munmap(from, size) < 0){
|
||||
|
@ -168,7 +168,7 @@ int setup_signal_stack_si(unsigned long stack_top, int sig,
|
||||
|
||||
frame = (struct rt_sigframe __user *)
|
||||
round_down(stack_top - sizeof(struct rt_sigframe), 16) - 8;
|
||||
((unsigned char *) frame) -= 128;
|
||||
frame = (struct rt_sigframe *) ((unsigned long) frame - 128);
|
||||
|
||||
if (!access_ok(VERIFY_WRITE, fp, sizeof(struct _fpstate)))
|
||||
goto out;
|
||||
|
@ -15,7 +15,7 @@ int switcheroo(int fd, int prot, void *from, void *to, int size)
|
||||
if(munmap(to, size) < 0){
|
||||
return(-1);
|
||||
}
|
||||
if(mmap(to, size, prot, MAP_SHARED | MAP_FIXED, fd, 0) != to){
|
||||
if(mmap(to, size, prot, MAP_SHARED | MAP_FIXED, fd, 0) == (void*) -1){
|
||||
return(-1);
|
||||
}
|
||||
if(munmap(from, size) < 0){
|
||||
|
@ -1,8 +1,8 @@
|
||||
/*
|
||||
* arch/v850/vmlinux.lds.S -- kernel linker script for v850 platforms
|
||||
*
|
||||
* Copyright (C) 2002,03,04 NEC Electronics Corporation
|
||||
* Copyright (C) 2002,03,04 Miles Bader <miles@gnu.org>
|
||||
* Copyright (C) 2002,03,04,05 NEC Electronics Corporation
|
||||
* Copyright (C) 2002,03,04,05 Miles Bader <miles@gnu.org>
|
||||
*
|
||||
* This file is subject to the terms and conditions of the GNU General
|
||||
* Public License. See the file COPYING in the main directory of this
|
||||
@ -61,6 +61,7 @@
|
||||
*(__kcrctab_gpl) \
|
||||
___stop___kcrctab_gpl = .; \
|
||||
/* Built-in module parameters */ \
|
||||
. = ALIGN (4) ; \
|
||||
___start___param = .; \
|
||||
*(__param) \
|
||||
___stop___param = .;
|
||||
|
@ -57,6 +57,7 @@ int syscall32_setup_pages(struct linux_binprm *bprm, int exstack)
|
||||
int npages = (VSYSCALL32_END - VSYSCALL32_BASE) >> PAGE_SHIFT;
|
||||
struct vm_area_struct *vma;
|
||||
struct mm_struct *mm = current->mm;
|
||||
int ret;
|
||||
|
||||
vma = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
|
||||
if (!vma)
|
||||
@ -78,7 +79,11 @@ int syscall32_setup_pages(struct linux_binprm *bprm, int exstack)
|
||||
vma->vm_mm = mm;
|
||||
|
||||
down_write(&mm->mmap_sem);
|
||||
insert_vm_struct(mm, vma);
|
||||
if ((ret = insert_vm_struct(mm, vma))) {
|
||||
up_write(&mm->mmap_sem);
|
||||
kmem_cache_free(vm_area_cachep, vma);
|
||||
return ret;
|
||||
}
|
||||
mm->total_vm += npages;
|
||||
up_write(&mm->mmap_sem);
|
||||
return 0;
|
||||
|
@ -355,7 +355,7 @@ static void rp_do_receive(struct r_port *info,
|
||||
ToRecv = space;
|
||||
|
||||
if (ToRecv <= 0)
|
||||
return;
|
||||
goto done;
|
||||
|
||||
/*
|
||||
* if status indicates there are errored characters in the
|
||||
@ -437,6 +437,7 @@ static void rp_do_receive(struct r_port *info,
|
||||
}
|
||||
/* Push the data up to the tty layer */
|
||||
ld->receive_buf(tty, tty->flip.char_buf, tty->flip.flag_buf, count);
|
||||
done:
|
||||
tty_ldisc_deref(ld);
|
||||
}
|
||||
|
||||
|
@ -2796,7 +2796,7 @@ void do_blank_screen(int entering_gfx)
|
||||
return;
|
||||
|
||||
if (vesa_off_interval) {
|
||||
blank_state = blank_vesa_wait,
|
||||
blank_state = blank_vesa_wait;
|
||||
mod_timer(&console_timer, jiffies + vesa_off_interval);
|
||||
}
|
||||
|
||||
|
@ -52,6 +52,8 @@ struct pcdp_uart {
|
||||
u32 clock_rate;
|
||||
u8 pci_prog_intfc;
|
||||
u8 flags;
|
||||
u16 conout_index;
|
||||
u32 reserved;
|
||||
} __attribute__((packed));
|
||||
|
||||
#define PCDP_IF_PCI 1
|
||||
|
@ -1084,7 +1084,8 @@ static int ohci_devctl(struct hpsb_host *host, enum devctl_cmd cmd, int arg)
|
||||
|
||||
initialize_dma_rcv_ctx(&ohci->ir_legacy_context, 1);
|
||||
|
||||
PRINT(KERN_ERR, "IR legacy activated");
|
||||
if (printk_ratelimit())
|
||||
PRINT(KERN_ERR, "IR legacy activated");
|
||||
}
|
||||
|
||||
spin_lock_irqsave(&ohci->IR_channel_lock, flags);
|
||||
|
@ -105,7 +105,7 @@ out:
|
||||
|
||||
static void amijoy_close(struct input_dev *dev)
|
||||
{
|
||||
down(&amijoysem);
|
||||
down(&amijoy_sem);
|
||||
if (!--amijoy_used)
|
||||
free_irq(IRQ_AMIGA_VERTB, amijoy_interrupt);
|
||||
up(&amijoy_sem);
|
||||
|
@ -1345,7 +1345,8 @@ void bitmap_endwrite(struct bitmap *bitmap, sector_t offset, unsigned long secto
|
||||
}
|
||||
}
|
||||
|
||||
int bitmap_start_sync(struct bitmap *bitmap, sector_t offset, int *blocks)
|
||||
int bitmap_start_sync(struct bitmap *bitmap, sector_t offset, int *blocks,
|
||||
int degraded)
|
||||
{
|
||||
bitmap_counter_t *bmc;
|
||||
int rv;
|
||||
@ -1362,8 +1363,10 @@ int bitmap_start_sync(struct bitmap *bitmap, sector_t offset, int *blocks)
|
||||
rv = 1;
|
||||
else if (NEEDED(*bmc)) {
|
||||
rv = 1;
|
||||
*bmc |= RESYNC_MASK;
|
||||
*bmc &= ~NEEDED_MASK;
|
||||
if (!degraded) { /* don't set/clear bits if degraded */
|
||||
*bmc |= RESYNC_MASK;
|
||||
*bmc &= ~NEEDED_MASK;
|
||||
}
|
||||
}
|
||||
}
|
||||
spin_unlock_irq(&bitmap->lock);
|
||||
|
@ -314,16 +314,16 @@ static int raid0_run (mddev_t *mddev)
|
||||
sector_t space = conf->hash_spacing;
|
||||
int round;
|
||||
conf->preshift = 0;
|
||||
if (sizeof(sector_t) > sizeof(unsigned long)) {
|
||||
if (sizeof(sector_t) > sizeof(u32)) {
|
||||
/*shift down space and s so that sector_div will work */
|
||||
while (space > (sector_t) (~(unsigned long)0)) {
|
||||
while (space > (sector_t) (~(u32)0)) {
|
||||
s >>= 1;
|
||||
space >>= 1;
|
||||
s += 1; /* force round-up */
|
||||
conf->preshift++;
|
||||
}
|
||||
}
|
||||
round = sector_div(s, (unsigned long)space) ? 1 : 0;
|
||||
round = sector_div(s, (u32)space) ? 1 : 0;
|
||||
nb_zone = s + round;
|
||||
}
|
||||
printk("raid0 : nb_zone is %d.\n", nb_zone);
|
||||
@ -443,7 +443,7 @@ static int raid0_make_request (request_queue_t *q, struct bio *bio)
|
||||
volatile
|
||||
#endif
|
||||
sector_t x = block >> conf->preshift;
|
||||
sector_div(x, (unsigned long)conf->hash_spacing);
|
||||
sector_div(x, (u32)conf->hash_spacing);
|
||||
zone = conf->hash_table[x];
|
||||
}
|
||||
|
||||
|
@ -1126,21 +1126,19 @@ static sector_t sync_request(mddev_t *mddev, sector_t sector_nr, int *skipped, i
|
||||
* only be one in raid1 resync.
|
||||
* We can find the current addess in mddev->curr_resync
|
||||
*/
|
||||
if (!conf->fullsync) {
|
||||
if (mddev->curr_resync < max_sector)
|
||||
bitmap_end_sync(mddev->bitmap,
|
||||
mddev->curr_resync,
|
||||
if (mddev->curr_resync < max_sector) /* aborted */
|
||||
bitmap_end_sync(mddev->bitmap, mddev->curr_resync,
|
||||
&sync_blocks, 1);
|
||||
bitmap_close_sync(mddev->bitmap);
|
||||
}
|
||||
if (mddev->curr_resync >= max_sector)
|
||||
else /* completed sync */
|
||||
conf->fullsync = 0;
|
||||
|
||||
bitmap_close_sync(mddev->bitmap);
|
||||
close_sync(conf);
|
||||
return 0;
|
||||
}
|
||||
|
||||
if (!conf->fullsync &&
|
||||
!bitmap_start_sync(mddev->bitmap, sector_nr, &sync_blocks)) {
|
||||
if (!bitmap_start_sync(mddev->bitmap, sector_nr, &sync_blocks, mddev->degraded) &&
|
||||
!conf->fullsync) {
|
||||
/* We can skip this block, and probably several more */
|
||||
*skipped = 1;
|
||||
return sync_blocks;
|
||||
@ -1243,15 +1241,15 @@ static sector_t sync_request(mddev_t *mddev, sector_t sector_nr, int *skipped, i
|
||||
len = (max_sector - sector_nr) << 9;
|
||||
if (len == 0)
|
||||
break;
|
||||
if (!conf->fullsync) {
|
||||
if (sync_blocks == 0) {
|
||||
if (!bitmap_start_sync(mddev->bitmap,
|
||||
sector_nr, &sync_blocks))
|
||||
break;
|
||||
if (sync_blocks < (PAGE_SIZE>>9))
|
||||
BUG();
|
||||
if (len > (sync_blocks<<9)) len = sync_blocks<<9;
|
||||
}
|
||||
if (sync_blocks == 0) {
|
||||
if (!bitmap_start_sync(mddev->bitmap, sector_nr,
|
||||
&sync_blocks, mddev->degraded) &&
|
||||
!conf->fullsync)
|
||||
break;
|
||||
if (sync_blocks < (PAGE_SIZE>>9))
|
||||
BUG();
|
||||
if (len > (sync_blocks<<9))
|
||||
len = sync_blocks<<9;
|
||||
}
|
||||
|
||||
for (i=0 ; i < conf->raid_disks; i++) {
|
||||
@ -1264,7 +1262,8 @@ static sector_t sync_request(mddev_t *mddev, sector_t sector_nr, int *skipped, i
|
||||
while (i > 0) {
|
||||
i--;
|
||||
bio = r1_bio->bios[i];
|
||||
if (bio->bi_end_io==NULL) continue;
|
||||
if (bio->bi_end_io==NULL)
|
||||
continue;
|
||||
/* remove last page from this bio */
|
||||
bio->bi_vcnt--;
|
||||
bio->bi_size -= len;
|
||||
|
@ -217,13 +217,11 @@ static int lgdt3302_set_parameters(struct dvb_frontend* fe,
|
||||
static u8 demux_ctrl_cfg[] = { DEMUX_CONTROL, 0xfb };
|
||||
static u8 agc_rf_cfg[] = { AGC_RF_BANDWIDTH0, 0x40, 0x93, 0x00 };
|
||||
static u8 agc_ctrl_cfg[] = { AGC_FUNC_CTRL2, 0xc6, 0x40 };
|
||||
static u8 agc_delay_cfg[] = { AGC_DELAY0, 0x00, 0x00, 0x00 };
|
||||
static u8 agc_delay_cfg[] = { AGC_DELAY0, 0x07, 0x00, 0xfe };
|
||||
static u8 agc_loop_cfg[] = { AGC_LOOP_BANDWIDTH0, 0x08, 0x9a };
|
||||
|
||||
/* Change only if we are actually changing the modulation */
|
||||
if (state->current_modulation != param->u.vsb.modulation) {
|
||||
int value;
|
||||
|
||||
switch(param->u.vsb.modulation) {
|
||||
case VSB_8:
|
||||
dprintk("%s: VSB_8 MODE\n", __FUNCTION__);
|
||||
@ -276,16 +274,8 @@ static int lgdt3302_set_parameters(struct dvb_frontend* fe,
|
||||
recovery center frequency register */
|
||||
i2c_writebytes(state, state->config->demod_address,
|
||||
vsb_freq_cfg, sizeof(vsb_freq_cfg));
|
||||
/* Set the value of 'INLVTHD' register 0x2a/0x2c
|
||||
to value from 'IFACC' register 0x39/0x3b -1 */
|
||||
i2c_selectreadbytes(state, AGC_RFIF_ACC0,
|
||||
&agc_delay_cfg[1], 3);
|
||||
value = ((agc_delay_cfg[1] & 0x0f) << 8) | agc_delay_cfg[3];
|
||||
value = value -1;
|
||||
dprintk("%s IFACC -1 = 0x%03x\n", __FUNCTION__, value);
|
||||
agc_delay_cfg[1] = (value >> 8) & 0x0f;
|
||||
agc_delay_cfg[2] = 0x00;
|
||||
agc_delay_cfg[3] = value & 0xff;
|
||||
|
||||
/* Set the value of 'INLVTHD' register 0x2a/0x2c to 0x7fe */
|
||||
i2c_writebytes(state, state->config->demod_address,
|
||||
agc_delay_cfg, sizeof(agc_delay_cfg));
|
||||
|
||||
|
@ -1,5 +1,5 @@
|
||||
/*
|
||||
* $Id: cx88-cards.c,v 1.85 2005/07/04 19:35:05 mkrufky Exp $
|
||||
* $Id: cx88-cards.c,v 1.86 2005/07/14 03:06:43 mchehab Exp $
|
||||
*
|
||||
* device driver for Conexant 2388x based TV cards
|
||||
* card-specific stuff.
|
||||
@ -682,9 +682,9 @@ struct cx88_board cx88_boards[] = {
|
||||
.name = "PixelView PlayTV Ultra Pro (Stereo)",
|
||||
/* May be also TUNER_YMEC_TVF_5533MF for NTSC/M or PAL/M */
|
||||
.tuner_type = TUNER_PHILIPS_FM1216ME_MK3,
|
||||
.radio_type = TUNER_TEA5767,
|
||||
.tuner_addr = 0xc2>>1,
|
||||
.radio_addr = 0xc0>>1,
|
||||
.radio_type = UNSET,
|
||||
.tuner_addr = ADDR_UNSET,
|
||||
.radio_addr = ADDR_UNSET,
|
||||
.input = {{
|
||||
.type = CX88_VMUX_TELEVISION,
|
||||
.vmux = 0,
|
||||
|
@ -1,5 +1,5 @@
|
||||
/*
|
||||
* $Id: cx88-dvb.c,v 1.41 2005/07/04 19:35:05 mkrufky Exp $
|
||||
* $Id: cx88-dvb.c,v 1.42 2005/07/12 15:44:55 mkrufky Exp $
|
||||
*
|
||||
* device driver for Conexant 2388x based TV cards
|
||||
* MPEG Transport Stream (DVB) routines
|
||||
@ -180,12 +180,14 @@ static struct mt352_config dntv_live_dvbt_config = {
|
||||
#if CONFIG_DVB_CX22702
|
||||
static struct cx22702_config connexant_refboard_config = {
|
||||
.demod_address = 0x43,
|
||||
.output_mode = CX22702_SERIAL_OUTPUT,
|
||||
.pll_address = 0x60,
|
||||
.pll_desc = &dvb_pll_thomson_dtt7579,
|
||||
};
|
||||
|
||||
static struct cx22702_config hauppauge_novat_config = {
|
||||
.demod_address = 0x43,
|
||||
.output_mode = CX22702_SERIAL_OUTPUT,
|
||||
.pll_address = 0x61,
|
||||
.pll_desc = &dvb_pll_thomson_dtt759x,
|
||||
};
|
||||
|
@ -1,5 +1,5 @@
|
||||
/*
|
||||
* $Id: cx88-video.c,v 1.79 2005/07/07 14:17:47 mchehab Exp $
|
||||
* $Id: cx88-video.c,v 1.80 2005/07/13 08:49:08 mchehab Exp $
|
||||
*
|
||||
* device driver for Conexant 2388x based TV cards
|
||||
* video4linux video interface
|
||||
@ -1346,6 +1346,11 @@ static int video_do_ioctl(struct inode *inode, struct file *file,
|
||||
dev->freq = f->frequency;
|
||||
cx88_newstation(core);
|
||||
cx88_call_i2c_clients(dev->core,VIDIOC_S_FREQUENCY,f);
|
||||
|
||||
/* When changing channels it is required to reset TVAUDIO */
|
||||
msleep (10);
|
||||
cx88_set_tvaudio(core);
|
||||
|
||||
up(&dev->lock);
|
||||
return 0;
|
||||
}
|
||||
|
@ -1,5 +1,5 @@
|
||||
/*
|
||||
* $Id: cx88.h,v 1.68 2005/07/07 14:17:47 mchehab Exp $
|
||||
* $Id: cx88.h,v 1.69 2005/07/13 17:25:25 mchehab Exp $
|
||||
*
|
||||
* v4l2 device driver for cx2388x based TV cards
|
||||
*
|
||||
@ -35,8 +35,8 @@
|
||||
#include "btcx-risc.h"
|
||||
#include "cx88-reg.h"
|
||||
|
||||
#include <linux/version.h>
|
||||
#define CX88_VERSION_CODE KERNEL_VERSION(0,0,4)
|
||||
#include <linux/utsname.h>
|
||||
#define CX88_VERSION_CODE KERNEL_VERSION(0,0,5)
|
||||
|
||||
#ifndef TRUE
|
||||
# define TRUE (1==1)
|
||||
|
@ -2,7 +2,7 @@
|
||||
* For Philips TEA5767 FM Chip used on some TV Cards like Prolink Pixelview
|
||||
* I2C address is allways 0xC0.
|
||||
*
|
||||
* $Id: tea5767.c,v 1.18 2005/07/07 03:02:55 mchehab Exp $
|
||||
* $Id: tea5767.c,v 1.21 2005/07/14 03:06:43 mchehab Exp $
|
||||
*
|
||||
* Copyright (c) 2005 Mauro Carvalho Chehab (mchehab@brturbo.com.br)
|
||||
* This code is placed under the terms of the GNU General Public License
|
||||
@ -153,17 +153,17 @@ static void tea5767_status_dump(unsigned char *buffer)
|
||||
|
||||
switch (TEA5767_HIGH_LO_32768) {
|
||||
case TEA5767_HIGH_LO_13MHz:
|
||||
frq = 1000 * (div * 50 - 700 - 225) / 4; /* Freq in KHz */
|
||||
frq = (div * 50000 - 700000 - 225000) / 4; /* Freq in KHz */
|
||||
break;
|
||||
case TEA5767_LOW_LO_13MHz:
|
||||
frq = 1000 * (div * 50 + 700 + 225) / 4; /* Freq in KHz */
|
||||
frq = (div * 50000 + 700000 + 225000) / 4; /* Freq in KHz */
|
||||
break;
|
||||
case TEA5767_LOW_LO_32768:
|
||||
frq = 1000 * (div * 32768 / 1000 + 700 + 225) / 4; /* Freq in KHz */
|
||||
frq = (div * 32768 + 700000 + 225000) / 4; /* Freq in KHz */
|
||||
break;
|
||||
case TEA5767_HIGH_LO_32768:
|
||||
default:
|
||||
frq = 1000 * (div * 32768 / 1000 - 700 - 225) / 4; /* Freq in KHz */
|
||||
frq = (div * 32768 - 700000 - 225000) / 4; /* Freq in KHz */
|
||||
break;
|
||||
}
|
||||
buffer[0] = (div >> 8) & 0x3f;
|
||||
@ -196,7 +196,7 @@ static void set_radio_freq(struct i2c_client *c, unsigned int frq)
|
||||
unsigned div;
|
||||
int rc;
|
||||
|
||||
tuner_dbg (PREFIX "radio freq counter %d\n", frq);
|
||||
tuner_dbg (PREFIX "radio freq = %d.%03d MHz\n", frq/16000,(frq/16)%1000);
|
||||
|
||||
/* Rounds freq to next decimal value - for 62.5 KHz step */
|
||||
/* frq = 20*(frq/16)+radio_frq[frq%16]; */
|
||||
@ -224,19 +224,19 @@ static void set_radio_freq(struct i2c_client *c, unsigned int frq)
|
||||
tuner_dbg ("TEA5767 radio HIGH LO inject xtal @ 13 MHz\n");
|
||||
buffer[2] |= TEA5767_HIGH_LO_INJECT;
|
||||
buffer[4] |= TEA5767_PLLREF_ENABLE;
|
||||
div = (frq * 4 / 16 + 700 + 225 + 25) / 50;
|
||||
div = (frq * 4000 / 16 + 700000 + 225000 + 25000) / 50000;
|
||||
break;
|
||||
case TEA5767_LOW_LO_13MHz:
|
||||
tuner_dbg ("TEA5767 radio LOW LO inject xtal @ 13 MHz\n");
|
||||
|
||||
buffer[4] |= TEA5767_PLLREF_ENABLE;
|
||||
div = (frq * 4 / 16 - 700 - 225 + 25) / 50;
|
||||
div = (frq * 4000 / 16 - 700000 - 225000 + 25000) / 50000;
|
||||
break;
|
||||
case TEA5767_LOW_LO_32768:
|
||||
tuner_dbg ("TEA5767 radio LOW LO inject xtal @ 32,768 MHz\n");
|
||||
buffer[3] |= TEA5767_XTAL_32768;
|
||||
/* const 700=4000*175 Khz - to adjust freq to right value */
|
||||
div = (1000 * (frq * 4 / 16 - 700 - 225) + 16384) >> 15;
|
||||
div = ((frq * 4000 / 16 - 700000 - 225000) + 16384) >> 15;
|
||||
break;
|
||||
case TEA5767_HIGH_LO_32768:
|
||||
default:
|
||||
@ -244,17 +244,21 @@ static void set_radio_freq(struct i2c_client *c, unsigned int frq)
|
||||
|
||||
buffer[2] |= TEA5767_HIGH_LO_INJECT;
|
||||
buffer[3] |= TEA5767_XTAL_32768;
|
||||
div = (1000 * (frq * 4 / 16 + 700 + 225) + 16384) >> 15;
|
||||
div = ((frq * (4000 / 16) + 700000 + 225000) + 16384) >> 15;
|
||||
break;
|
||||
}
|
||||
buffer[0] = (div >> 8) & 0x3f;
|
||||
buffer[1] = div & 0xff;
|
||||
|
||||
if (tuner_debug)
|
||||
tea5767_status_dump(buffer);
|
||||
|
||||
if (5 != (rc = i2c_master_send(c, buffer, 5)))
|
||||
tuner_warn("i2c i/o error: rc == %d (should be 5)\n", rc);
|
||||
|
||||
if (tuner_debug) {
|
||||
if (5 != (rc = i2c_master_recv(c, buffer, 5)))
|
||||
tuner_warn("i2c i/o error: rc == %d (should be 5)\n", rc);
|
||||
else
|
||||
tea5767_status_dump(buffer);
|
||||
}
|
||||
}
|
||||
|
||||
static int tea5767_signal(struct i2c_client *c)
|
||||
@ -294,7 +298,7 @@ int tea5767_autodetection(struct i2c_client *c)
|
||||
struct tuner *t = i2c_get_clientdata(c);
|
||||
|
||||
if (5 != (rc = i2c_master_recv(c, buffer, 5))) {
|
||||
tuner_warn("it is not a TEA5767. Received %i chars.\n", rc);
|
||||
tuner_warn("It is not a TEA5767. Received %i bytes.\n", rc);
|
||||
return EINVAL;
|
||||
}
|
||||
|
||||
@ -310,11 +314,11 @@ int tea5767_autodetection(struct i2c_client *c)
|
||||
* bit 0 : internally set to 0
|
||||
* Byte 5: bit 7:0 : == 0
|
||||
*/
|
||||
|
||||
if (!((buffer[3] & 0x0f) == 0x00) && (buffer[4] == 0x00)) {
|
||||
tuner_warn("Chip ID is not zero. It is not a TEA5767\n");
|
||||
return EINVAL;
|
||||
}
|
||||
|
||||
tuner_warn("TEA5767 detected.\n");
|
||||
return 0;
|
||||
}
|
||||
|
@ -1,5 +1,5 @@
|
||||
/*
|
||||
* $Id: tuner-core.c,v 1.55 2005/07/08 13:20:33 mchehab Exp $
|
||||
* $Id: tuner-core.c,v 1.58 2005/07/14 03:06:43 mchehab Exp $
|
||||
*
|
||||
* i2c tv tuner chip device driver
|
||||
* core core, i.e. kernel interfaces, registering and so on
|
||||
@ -39,6 +39,9 @@ I2C_CLIENT_INSMOD;
|
||||
static unsigned int addr = 0;
|
||||
module_param(addr, int, 0444);
|
||||
|
||||
static unsigned int no_autodetect = 0;
|
||||
module_param(no_autodetect, int, 0444);
|
||||
|
||||
/* insmod options used at runtime => read/write */
|
||||
unsigned int tuner_debug = 0;
|
||||
module_param(tuner_debug, int, 0644);
|
||||
@ -318,17 +321,19 @@ static int tuner_attach(struct i2c_adapter *adap, int addr, int kind)
|
||||
tuner_info("chip found @ 0x%x (%s)\n", addr << 1, adap->name);
|
||||
|
||||
/* TEA5767 autodetection code - only for addr = 0xc0 */
|
||||
if (addr == 0x60) {
|
||||
if (tea5767_autodetection(&t->i2c) != EINVAL) {
|
||||
t->type = TUNER_TEA5767;
|
||||
t->mode_mask = T_RADIO;
|
||||
t->mode = T_STANDBY;
|
||||
t->freq = 87.5 * 16; /* Sets freq to FM range */
|
||||
default_mode_mask &= ~T_RADIO;
|
||||
if (!no_autodetect) {
|
||||
if (addr == 0x60) {
|
||||
if (tea5767_autodetection(&t->i2c) != EINVAL) {
|
||||
t->type = TUNER_TEA5767;
|
||||
t->mode_mask = T_RADIO;
|
||||
t->mode = T_STANDBY;
|
||||
t->freq = 87.5 * 16; /* Sets freq to FM range */
|
||||
default_mode_mask &= ~T_RADIO;
|
||||
|
||||
i2c_attach_client (&t->i2c);
|
||||
set_type(&t->i2c,t->type, t->mode_mask);
|
||||
return 0;
|
||||
i2c_attach_client (&t->i2c);
|
||||
set_type(&t->i2c,t->type, t->mode_mask);
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@ -631,7 +636,9 @@ static int tuner_command(struct i2c_client *client, unsigned int cmd, void *arg)
|
||||
break;
|
||||
}
|
||||
default:
|
||||
tuner_dbg("Unimplemented IOCTL 0x%08x called to tuner.\n", cmd);
|
||||
tuner_dbg("Unimplemented IOCTL 0x%08x(dir=%d,tp=0x%02x,nr=%d,sz=%d)\n",
|
||||
cmd, _IOC_DIR(cmd), _IOC_TYPE(cmd),
|
||||
_IOC_NR(cmd), _IOC_SIZE(cmd));
|
||||
break;
|
||||
}
|
||||
|
||||
|
@ -300,7 +300,7 @@ config MTD_JEDEC
|
||||
|
||||
config MTD_XIP
|
||||
bool "XIP aware MTD support"
|
||||
depends on !SMP && (MTD_CFI_INTELEXT || MTD_CFI_AMDSTD) && EXPERIMENTAL
|
||||
depends on !SMP && (MTD_CFI_INTELEXT || MTD_CFI_AMDSTD) && EXPERIMENTAL && ARM
|
||||
default y if XIP_KERNEL
|
||||
help
|
||||
This allows MTD support to work with flash memory which is also
|
||||
|
@ -4,7 +4,7 @@
|
||||
*
|
||||
* (C) 2000 Red Hat. GPL'd
|
||||
*
|
||||
* $Id: cfi_cmdset_0020.c,v 1.17 2004/11/20 12:49:04 dwmw2 Exp $
|
||||
* $Id: cfi_cmdset_0020.c,v 1.19 2005/07/13 15:52:45 dwmw2 Exp $
|
||||
*
|
||||
* 10/10/2000 Nicolas Pitre <nico@cam.org>
|
||||
* - completely revamped method functions so they are aware and
|
||||
@ -16,6 +16,8 @@
|
||||
* - modified Intel Command Set 0x0001 to support ST Advanced Architecture
|
||||
* (command set 0x0020)
|
||||
* - added a writev function
|
||||
* 07/13/2005 Joern Engel <joern@wh.fh-wedel.de>
|
||||
* - Plugged memory leak in cfi_staa_writev().
|
||||
*/
|
||||
|
||||
#include <linux/version.h>
|
||||
@ -719,6 +721,7 @@ cfi_staa_writev(struct mtd_info *mtd, const struct kvec *vecs,
|
||||
write_error:
|
||||
if (retlen)
|
||||
*retlen = totlen;
|
||||
kfree(buffer);
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -59,7 +59,7 @@
|
||||
* The AG-AND chips have nice features for speed improvement,
|
||||
* which are not supported yet. Read / program 4 pages in one go.
|
||||
*
|
||||
* $Id: nand_base.c,v 1.146 2005/06/17 15:02:06 gleixner Exp $
|
||||
* $Id: nand_base.c,v 1.147 2005/07/15 07:18:06 gleixner Exp $
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
@ -1409,16 +1409,6 @@ static int nand_read_oob (struct mtd_info *mtd, loff_t from, size_t len, size_t
|
||||
thislen = min_t(int, thislen, len);
|
||||
this->read_buf(mtd, &buf[i], thislen);
|
||||
i += thislen;
|
||||
|
||||
/* Apply delay or wait for ready/busy pin
|
||||
* Do this before the AUTOINCR check, so no problems
|
||||
* arise if a chip which does auto increment
|
||||
* is marked as NOAUTOINCR by the board driver.
|
||||
*/
|
||||
if (!this->dev_ready)
|
||||
udelay (this->chip_delay);
|
||||
else
|
||||
nand_wait_ready(mtd);
|
||||
|
||||
/* Read more ? */
|
||||
if (i < len) {
|
||||
@ -1432,6 +1422,16 @@ static int nand_read_oob (struct mtd_info *mtd, loff_t from, size_t len, size_t
|
||||
this->select_chip(mtd, chipnr);
|
||||
}
|
||||
|
||||
/* Apply delay or wait for ready/busy pin
|
||||
* Do this before the AUTOINCR check, so no problems
|
||||
* arise if a chip which does auto increment
|
||||
* is marked as NOAUTOINCR by the board driver.
|
||||
*/
|
||||
if (!this->dev_ready)
|
||||
udelay (this->chip_delay);
|
||||
else
|
||||
nand_wait_ready(mtd);
|
||||
|
||||
/* Check, if the chip supports auto page increment
|
||||
* or if we have hit a block boundary.
|
||||
*/
|
||||
|
@ -6,7 +6,7 @@
|
||||
*
|
||||
* Copyright (C) 2004 Thomas Gleixner (tglx@linutronix.de)
|
||||
*
|
||||
* $Id: nand_bbt.c,v 1.33 2005/06/14 15:47:56 gleixner Exp $
|
||||
* $Id: nand_bbt.c,v 1.35 2005/07/15 13:53:47 gleixner Exp $
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
@ -109,24 +109,21 @@ static int check_pattern (uint8_t *buf, int len, int paglen, struct nand_bbt_des
|
||||
/**
|
||||
* check_short_pattern - [GENERIC] check if a pattern is in the buffer
|
||||
* @buf: the buffer to search
|
||||
* @len: the length of buffer to search
|
||||
* @paglen: the pagelength
|
||||
* @td: search pattern descriptor
|
||||
*
|
||||
* Check for a pattern at the given place. Used to search bad block
|
||||
* tables and good / bad block identifiers. Same as check_pattern, but
|
||||
* no optional empty check and the pattern is expected to start
|
||||
* at offset 0.
|
||||
* no optional empty check
|
||||
*
|
||||
*/
|
||||
static int check_short_pattern (uint8_t *buf, int len, int paglen, struct nand_bbt_descr *td)
|
||||
static int check_short_pattern (uint8_t *buf, struct nand_bbt_descr *td)
|
||||
{
|
||||
int i;
|
||||
uint8_t *p = buf;
|
||||
|
||||
/* Compare the pattern */
|
||||
for (i = 0; i < td->len; i++) {
|
||||
if (p[i] != td->pattern[i])
|
||||
if (p[td->offs + i] != td->pattern[i])
|
||||
return -1;
|
||||
}
|
||||
return 0;
|
||||
@ -337,13 +334,14 @@ static int create_bbt (struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr
|
||||
if (!(bd->options & NAND_BBT_SCANEMPTY)) {
|
||||
size_t retlen;
|
||||
|
||||
/* No need to read pages fully, just read required OOB bytes */
|
||||
ret = mtd->read_oob(mtd, from + j * mtd->oobblock + bd->offs,
|
||||
readlen, &retlen, &buf[0]);
|
||||
/* Read the full oob until read_oob is fixed to
|
||||
* handle single byte reads for 16 bit buswidth */
|
||||
ret = mtd->read_oob(mtd, from + j * mtd->oobblock,
|
||||
mtd->oobsize, &retlen, buf);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (check_short_pattern (&buf[j * scanlen], scanlen, mtd->oobblock, bd)) {
|
||||
if (check_short_pattern (buf, bd)) {
|
||||
this->bbt[i >> 3] |= 0x03 << (i & 0x6);
|
||||
printk (KERN_WARNING "Bad eraseblock %d at 0x%08x\n",
|
||||
i >> 1, (unsigned int) from);
|
||||
|
@ -426,8 +426,6 @@
|
||||
static char *serial_version = "$Revision: 1.25 $";
|
||||
|
||||
#include <linux/config.h>
|
||||
#include <linux/version.h>
|
||||
|
||||
#include <linux/types.h>
|
||||
#include <linux/errno.h>
|
||||
#include <linux/signal.h>
|
||||
|
@ -25,7 +25,6 @@
|
||||
#define SERIAL_DO_RESTART
|
||||
#include <linux/module.h>
|
||||
#include <linux/config.h>
|
||||
#include <linux/version.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/errno.h>
|
||||
#include <linux/signal.h>
|
||||
|
@ -28,7 +28,6 @@
|
||||
#define __JSM_DRIVER_H
|
||||
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/version.h>
|
||||
#include <linux/types.h> /* To pick up the varions Linux types */
|
||||
#include <linux/tty.h>
|
||||
#include <linux/serial_core.h>
|
||||
|
@ -12,14 +12,25 @@
|
||||
History:
|
||||
|
||||
2005-05-19 v0.1 Initial version, based on incomplete docs
|
||||
and analysis of misbehavior of the standard driver
|
||||
and analysis of misbehavior with the standard driver
|
||||
2005-05-20 v0.2 Extended the input buffer to avoid losing
|
||||
random 64-byte chunks of data
|
||||
2005-05-21 v0.3 implemented chars_in_buffer()
|
||||
turned on low_latency
|
||||
simplified the code somewhat
|
||||
2005-05-24 v0.4 option_write() sometimes deadlocked under heavy load
|
||||
removed some dead code
|
||||
added sponsor notice
|
||||
coding style clean-up
|
||||
2005-06-20 v0.4.1 add missing braces :-/
|
||||
killed end-of-line whitespace
|
||||
2005-07-15 v0.4.2 rename WLAN product to FUSION, add FUSION2
|
||||
|
||||
Work sponsored by: Sigos GmbH, Germany <info@sigos.de>
|
||||
|
||||
*/
|
||||
#define DRIVER_VERSION "v0.3"
|
||||
|
||||
#define DRIVER_VERSION "v0.4"
|
||||
#define DRIVER_AUTHOR "Matthias Urlichs <smurf@smurf.noris.de>"
|
||||
#define DRIVER_DESC "Option Card (PC-Card to) USB to Serial Driver"
|
||||
|
||||
@ -44,7 +55,6 @@ static int option_write_room (struct usb_serial_port *port);
|
||||
|
||||
static void option_instat_callback(struct urb *urb, struct pt_regs *regs);
|
||||
|
||||
|
||||
static int option_write (struct usb_serial_port *port,
|
||||
const unsigned char *buf, int count);
|
||||
|
||||
@ -60,14 +70,17 @@ static int option_tiocmset (struct usb_serial_port *port, struct file *file,
|
||||
static int option_send_setup (struct usb_serial_port *port);
|
||||
|
||||
/* Vendor and product IDs */
|
||||
#define OPTION_VENDOR_ID 0x0AF0
|
||||
#define OPTION_VENDOR_ID 0x0AF0
|
||||
|
||||
#define OPTION_PRODUCT_OLD 0x5000
|
||||
#define OPTION_PRODUCT_FUSION 0x6000
|
||||
#define OPTION_PRODUCT_FUSION2 0x6300
|
||||
|
||||
#define OPTION_PRODUCT_OLD 0x5000
|
||||
#define OPTION_PRODUCT_WLAN 0x6000
|
||||
|
||||
static struct usb_device_id option_ids[] = {
|
||||
{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_OLD) },
|
||||
{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_WLAN) },
|
||||
{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_FUSION) },
|
||||
{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_FUSION2) },
|
||||
{ } /* Terminating entry */
|
||||
};
|
||||
|
||||
@ -85,58 +98,62 @@ static struct usb_driver option_driver = {
|
||||
* recognizes separately, thus num_port=1.
|
||||
*/
|
||||
static struct usb_serial_device_type option_3port_device = {
|
||||
.owner = THIS_MODULE,
|
||||
.name = "Option 3-port card",
|
||||
.short_name = "option",
|
||||
.id_table = option_ids,
|
||||
.num_interrupt_in = NUM_DONT_CARE,
|
||||
.num_bulk_in = NUM_DONT_CARE,
|
||||
.num_bulk_out = NUM_DONT_CARE,
|
||||
.num_ports = 1, /* 3 */
|
||||
.open = option_open,
|
||||
.close = option_close,
|
||||
.write = option_write,
|
||||
.write_room = option_write_room,
|
||||
.chars_in_buffer = option_chars_in_buffer,
|
||||
.throttle = option_rx_throttle,
|
||||
.unthrottle = option_rx_unthrottle,
|
||||
.ioctl = option_ioctl,
|
||||
.set_termios = option_set_termios,
|
||||
.break_ctl = option_break_ctl,
|
||||
.tiocmget = option_tiocmget,
|
||||
.tiocmset = option_tiocmset,
|
||||
.attach = option_startup,
|
||||
.shutdown = option_shutdown,
|
||||
.read_int_callback = option_instat_callback,
|
||||
.owner = THIS_MODULE,
|
||||
.name = "Option 3G data card",
|
||||
.short_name = "option",
|
||||
.id_table = option_ids,
|
||||
.num_interrupt_in = NUM_DONT_CARE,
|
||||
.num_bulk_in = NUM_DONT_CARE,
|
||||
.num_bulk_out = NUM_DONT_CARE,
|
||||
.num_ports = 1, /* 3, but the card reports its ports separately */
|
||||
.open = option_open,
|
||||
.close = option_close,
|
||||
.write = option_write,
|
||||
.write_room = option_write_room,
|
||||
.chars_in_buffer = option_chars_in_buffer,
|
||||
.throttle = option_rx_throttle,
|
||||
.unthrottle = option_rx_unthrottle,
|
||||
.ioctl = option_ioctl,
|
||||
.set_termios = option_set_termios,
|
||||
.break_ctl = option_break_ctl,
|
||||
.tiocmget = option_tiocmget,
|
||||
.tiocmset = option_tiocmset,
|
||||
.attach = option_startup,
|
||||
.shutdown = option_shutdown,
|
||||
.read_int_callback = option_instat_callback,
|
||||
};
|
||||
|
||||
#ifdef CONFIG_USB_DEBUG
|
||||
static int debug;
|
||||
#else
|
||||
#define debug 0
|
||||
#endif
|
||||
|
||||
|
||||
/* per port private data */
|
||||
|
||||
#define N_IN_URB 4
|
||||
#define N_OUT_URB 1
|
||||
#define IN_BUFLEN 1024
|
||||
#define OUT_BUFLEN 1024
|
||||
#define N_IN_URB 4
|
||||
#define N_OUT_URB 1
|
||||
#define IN_BUFLEN 1024
|
||||
#define OUT_BUFLEN 128
|
||||
|
||||
struct option_port_private {
|
||||
/* Input endpoints and buffer for this port */
|
||||
struct urb *in_urbs[N_IN_URB];
|
||||
char in_buffer[N_IN_URB][IN_BUFLEN];
|
||||
struct urb *in_urbs[N_IN_URB];
|
||||
char in_buffer[N_IN_URB][IN_BUFLEN];
|
||||
/* Output endpoints and buffer for this port */
|
||||
struct urb *out_urbs[N_OUT_URB];
|
||||
char out_buffer[N_OUT_URB][OUT_BUFLEN];
|
||||
struct urb *out_urbs[N_OUT_URB];
|
||||
char out_buffer[N_OUT_URB][OUT_BUFLEN];
|
||||
|
||||
/* Settings for the port */
|
||||
int rts_state; /* Handshaking pins (outputs) */
|
||||
int dtr_state;
|
||||
int cts_state; /* Handshaking pins (inputs) */
|
||||
int dsr_state;
|
||||
int dcd_state;
|
||||
int ri_state;
|
||||
// int break_on;
|
||||
int rts_state; /* Handshaking pins (outputs) */
|
||||
int dtr_state;
|
||||
int cts_state; /* Handshaking pins (inputs) */
|
||||
int dsr_state;
|
||||
int dcd_state;
|
||||
int ri_state;
|
||||
|
||||
unsigned long tx_start_time[N_OUT_URB];
|
||||
unsigned long tx_start_time[N_OUT_URB];
|
||||
};
|
||||
|
||||
|
||||
@ -190,13 +207,13 @@ static void
|
||||
option_break_ctl (struct usb_serial_port *port, int break_state)
|
||||
{
|
||||
/* Unfortunately, I don't know how to send a break */
|
||||
dbg("%s", __FUNCTION__);
|
||||
dbg("%s", __FUNCTION__);
|
||||
}
|
||||
|
||||
|
||||
static void
|
||||
option_set_termios (struct usb_serial_port *port,
|
||||
struct termios *old_termios)
|
||||
struct termios *old_termios)
|
||||
{
|
||||
dbg("%s", __FUNCTION__);
|
||||
|
||||
@ -204,10 +221,10 @@ option_set_termios (struct usb_serial_port *port,
|
||||
}
|
||||
|
||||
static int
|
||||
option_tiocmget(struct usb_serial_port *port, struct file *file)
|
||||
option_tiocmget (struct usb_serial_port *port, struct file *file)
|
||||
{
|
||||
unsigned int value;
|
||||
struct option_port_private *portdata;
|
||||
unsigned int value;
|
||||
struct option_port_private *portdata;
|
||||
|
||||
portdata = usb_get_serial_port_data(port);
|
||||
|
||||
@ -225,7 +242,7 @@ static int
|
||||
option_tiocmset (struct usb_serial_port *port, struct file *file,
|
||||
unsigned int set, unsigned int clear)
|
||||
{
|
||||
struct option_port_private *portdata;
|
||||
struct option_port_private *portdata;
|
||||
|
||||
portdata = usb_get_serial_port_data(port);
|
||||
|
||||
@ -250,71 +267,50 @@ option_ioctl (struct usb_serial_port *port, struct file *file,
|
||||
|
||||
/* Write */
|
||||
static int
|
||||
option_write(struct usb_serial_port *port,
|
||||
const unsigned char *buf, int count)
|
||||
option_write (struct usb_serial_port *port,
|
||||
const unsigned char *buf, int count)
|
||||
{
|
||||
struct option_port_private *portdata;
|
||||
int i;
|
||||
int left, todo;
|
||||
struct urb *this_urb = NULL; /* spurious */
|
||||
int err;
|
||||
struct option_port_private *portdata;
|
||||
int i;
|
||||
int left, todo;
|
||||
struct urb *this_urb = NULL; /* spurious */
|
||||
int err;
|
||||
|
||||
portdata = usb_get_serial_port_data(port);
|
||||
|
||||
dbg("%s: write (%d chars)", __FUNCTION__, count);
|
||||
|
||||
#if 0
|
||||
spin_lock(&port->lock);
|
||||
if (port->write_urb_busy) {
|
||||
spin_unlock(&port->lock);
|
||||
dbg("%s: already writing", __FUNCTION__);
|
||||
return 0;
|
||||
}
|
||||
port->write_urb_busy = 1;
|
||||
spin_unlock(&port->lock);
|
||||
#endif
|
||||
|
||||
i = 0;
|
||||
left = count;
|
||||
while (left>0) {
|
||||
for (i=0; left > 0 && i < N_OUT_URB; i++) {
|
||||
todo = left;
|
||||
if (todo > OUT_BUFLEN)
|
||||
todo = OUT_BUFLEN;
|
||||
|
||||
for (;i < N_OUT_URB; i++) {
|
||||
/* Check we have a valid urb/endpoint before we use it... */
|
||||
this_urb = portdata->out_urbs[i];
|
||||
if (this_urb->status != -EINPROGRESS)
|
||||
break;
|
||||
this_urb = portdata->out_urbs[i];
|
||||
if (this_urb->status == -EINPROGRESS) {
|
||||
if (this_urb->transfer_flags & URB_ASYNC_UNLINK)
|
||||
continue;
|
||||
if (time_before(jiffies, portdata->tx_start_time[i] + 10 * HZ))
|
||||
continue;
|
||||
this_urb->transfer_flags |= URB_ASYNC_UNLINK;
|
||||
usb_unlink_urb(this_urb);
|
||||
continue;
|
||||
}
|
||||
|
||||
if (i == N_OUT_URB) {
|
||||
/* no bulk out free! */
|
||||
dbg("%s: no output urb -- left %d", __FUNCTION__,count-left);
|
||||
#if 0
|
||||
port->write_urb_busy = 0;
|
||||
#endif
|
||||
return count-left;
|
||||
}
|
||||
if (this_urb->status != 0)
|
||||
dbg("usb_write %p failed (err=%d)", this_urb, this_urb->status);
|
||||
|
||||
dbg("%s: endpoint %d buf %d", __FUNCTION__, usb_pipeendpoint(this_urb->pipe), i);
|
||||
|
||||
/* send the data */
|
||||
memcpy (this_urb->transfer_buffer, buf, todo);
|
||||
|
||||
/* send the data out the bulk port */
|
||||
this_urb->transfer_buffer_length = todo;
|
||||
|
||||
this_urb->transfer_flags &= ~URB_ASYNC_UNLINK;
|
||||
this_urb->dev = port->serial->dev;
|
||||
err = usb_submit_urb(this_urb, GFP_ATOMIC);
|
||||
if (err) {
|
||||
dbg("usb_submit_urb %p (write bulk) failed (%d,, has %d)", this_urb, err, this_urb->status);
|
||||
dbg("usb_submit_urb %p (write bulk) failed (%d, has %d)", this_urb, err, this_urb->status);
|
||||
continue;
|
||||
}
|
||||
portdata->tx_start_time[i] = jiffies;
|
||||
@ -323,9 +319,6 @@ option_write(struct usb_serial_port *port,
|
||||
}
|
||||
|
||||
count -= left;
|
||||
#if 0
|
||||
port->write_urb_busy = 0;
|
||||
#endif
|
||||
dbg("%s: wrote (did %d)", __FUNCTION__, count);
|
||||
return count;
|
||||
}
|
||||
@ -333,7 +326,7 @@ option_write(struct usb_serial_port *port,
|
||||
static void
|
||||
option_indat_callback (struct urb *urb, struct pt_regs *regs)
|
||||
{
|
||||
int i, err;
|
||||
int i, err;
|
||||
int endpoint;
|
||||
struct usb_serial_port *port;
|
||||
struct tty_struct *tty;
|
||||
@ -444,10 +437,11 @@ option_write_room (struct usb_serial_port *port)
|
||||
|
||||
portdata = usb_get_serial_port_data(port);
|
||||
|
||||
for (i=0; i < N_OUT_URB; i++)
|
||||
for (i=0; i < N_OUT_URB; i++) {
|
||||
this_urb = portdata->out_urbs[i];
|
||||
if (this_urb && this_urb->status != -EINPROGRESS)
|
||||
data_len += OUT_BUFLEN;
|
||||
}
|
||||
|
||||
dbg("%s: %d", __FUNCTION__, data_len);
|
||||
return data_len;
|
||||
@ -464,11 +458,11 @@ option_chars_in_buffer (struct usb_serial_port *port)
|
||||
|
||||
portdata = usb_get_serial_port_data(port);
|
||||
|
||||
for (i=0; i < N_OUT_URB; i++)
|
||||
for (i=0; i < N_OUT_URB; i++) {
|
||||
this_urb = portdata->out_urbs[i];
|
||||
if (this_urb && this_urb->status == -EINPROGRESS)
|
||||
data_len += this_urb->transfer_buffer_length;
|
||||
|
||||
}
|
||||
dbg("%s: %d", __FUNCTION__, data_len);
|
||||
return data_len;
|
||||
}
|
||||
@ -477,10 +471,10 @@ option_chars_in_buffer (struct usb_serial_port *port)
|
||||
static int
|
||||
option_open (struct usb_serial_port *port, struct file *filp)
|
||||
{
|
||||
struct option_port_private *portdata;
|
||||
struct usb_serial *serial = port->serial;
|
||||
int i, err;
|
||||
struct urb *urb;
|
||||
struct option_port_private *portdata;
|
||||
struct usb_serial *serial = port->serial;
|
||||
int i, err;
|
||||
struct urb *urb;
|
||||
|
||||
portdata = usb_get_serial_port_data(port);
|
||||
|
||||
@ -528,7 +522,7 @@ option_open (struct usb_serial_port *port, struct file *filp)
|
||||
}
|
||||
|
||||
static inline void
|
||||
stop_urb(struct urb *urb)
|
||||
stop_urb (struct urb *urb)
|
||||
{
|
||||
if (urb && urb->status == -EINPROGRESS) {
|
||||
urb->transfer_flags &= ~URB_ASYNC_UNLINK;
|
||||
@ -537,11 +531,11 @@ stop_urb(struct urb *urb)
|
||||
}
|
||||
|
||||
static void
|
||||
option_close(struct usb_serial_port *port, struct file *filp)
|
||||
option_close (struct usb_serial_port *port, struct file *filp)
|
||||
{
|
||||
int i;
|
||||
struct usb_serial *serial = port->serial;
|
||||
struct option_port_private *portdata;
|
||||
int i;
|
||||
struct usb_serial *serial = port->serial;
|
||||
struct option_port_private *portdata;
|
||||
|
||||
dbg("%s", __FUNCTION__);
|
||||
portdata = usb_get_serial_port_data(port);
|
||||
@ -589,11 +583,11 @@ option_setup_urb (struct usb_serial *serial, int endpoint,
|
||||
|
||||
/* Setup urbs */
|
||||
static void
|
||||
option_setup_urbs(struct usb_serial *serial)
|
||||
option_setup_urbs (struct usb_serial *serial)
|
||||
{
|
||||
int j;
|
||||
struct usb_serial_port *port;
|
||||
struct option_port_private *portdata;
|
||||
int j;
|
||||
struct usb_serial_port *port;
|
||||
struct option_port_private *portdata;
|
||||
|
||||
dbg("%s", __FUNCTION__);
|
||||
|
||||
@ -617,7 +611,7 @@ option_setup_urbs(struct usb_serial *serial)
|
||||
|
||||
|
||||
static int
|
||||
option_send_setup(struct usb_serial_port *port)
|
||||
option_send_setup (struct usb_serial_port *port)
|
||||
{
|
||||
struct usb_serial *serial = port->serial;
|
||||
struct option_port_private *portdata;
|
||||
@ -644,9 +638,9 @@ option_send_setup(struct usb_serial_port *port)
|
||||
static int
|
||||
option_startup (struct usb_serial *serial)
|
||||
{
|
||||
int i, err;
|
||||
struct usb_serial_port *port;
|
||||
struct option_port_private *portdata;
|
||||
int i, err;
|
||||
struct usb_serial_port *port;
|
||||
struct option_port_private *portdata;
|
||||
|
||||
dbg("%s", __FUNCTION__);
|
||||
|
||||
@ -677,9 +671,9 @@ option_startup (struct usb_serial *serial)
|
||||
static void
|
||||
option_shutdown (struct usb_serial *serial)
|
||||
{
|
||||
int i, j;
|
||||
struct usb_serial_port *port;
|
||||
struct option_port_private *portdata;
|
||||
int i, j;
|
||||
struct usb_serial_port *port;
|
||||
struct option_port_private *portdata;
|
||||
|
||||
dbg("%s", __FUNCTION__);
|
||||
|
||||
@ -724,6 +718,8 @@ MODULE_DESCRIPTION(DRIVER_DESC);
|
||||
MODULE_VERSION(DRIVER_VERSION);
|
||||
MODULE_LICENSE("GPL");
|
||||
|
||||
#ifdef CONFIG_USB_DEBUG
|
||||
module_param(debug, bool, S_IRUGO | S_IWUSR);
|
||||
MODULE_PARM_DESC(debug, "Debug messages");
|
||||
#endif
|
||||
|
||||
|
@ -15,66 +15,79 @@
|
||||
#include "xip.h"
|
||||
|
||||
static inline int
|
||||
__inode_direct_access(struct inode *inode, sector_t sector, unsigned long *data) {
|
||||
__inode_direct_access(struct inode *inode, sector_t sector,
|
||||
unsigned long *data)
|
||||
{
|
||||
BUG_ON(!inode->i_sb->s_bdev->bd_disk->fops->direct_access);
|
||||
return inode->i_sb->s_bdev->bd_disk->fops
|
||||
->direct_access(inode->i_sb->s_bdev,sector,data);
|
||||
}
|
||||
|
||||
static inline int
|
||||
__ext2_get_sector(struct inode *inode, sector_t offset, int create,
|
||||
sector_t *result)
|
||||
{
|
||||
struct buffer_head tmp;
|
||||
int rc;
|
||||
|
||||
memset(&tmp, 0, sizeof(struct buffer_head));
|
||||
rc = ext2_get_block(inode, offset/ (PAGE_SIZE/512), &tmp,
|
||||
create);
|
||||
*result = tmp.b_blocknr;
|
||||
|
||||
/* did we get a sparse block (hole in the file)? */
|
||||
if (!(*result)) {
|
||||
BUG_ON(create);
|
||||
rc = -ENODATA;
|
||||
}
|
||||
|
||||
return rc;
|
||||
}
|
||||
|
||||
int
|
||||
ext2_clear_xip_target(struct inode *inode, int block) {
|
||||
sector_t sector = block*(PAGE_SIZE/512);
|
||||
ext2_clear_xip_target(struct inode *inode, int block)
|
||||
{
|
||||
sector_t sector = block * (PAGE_SIZE/512);
|
||||
unsigned long data;
|
||||
int rc;
|
||||
|
||||
rc = __inode_direct_access(inode, sector, &data);
|
||||
if (rc)
|
||||
return rc;
|
||||
clear_page((void*)data);
|
||||
return 0;
|
||||
if (!rc)
|
||||
clear_page((void*)data);
|
||||
return rc;
|
||||
}
|
||||
|
||||
void ext2_xip_verify_sb(struct super_block *sb)
|
||||
{
|
||||
struct ext2_sb_info *sbi = EXT2_SB(sb);
|
||||
|
||||
if ((sbi->s_mount_opt & EXT2_MOUNT_XIP)) {
|
||||
if ((sb->s_bdev == NULL) ||
|
||||
sb->s_bdev->bd_disk == NULL ||
|
||||
sb->s_bdev->bd_disk->fops == NULL ||
|
||||
sb->s_bdev->bd_disk->fops->direct_access == NULL) {
|
||||
sbi->s_mount_opt &= (~EXT2_MOUNT_XIP);
|
||||
ext2_warning(sb, __FUNCTION__,
|
||||
"ignoring xip option - not supported by bdev");
|
||||
}
|
||||
if ((sbi->s_mount_opt & EXT2_MOUNT_XIP) &&
|
||||
!sb->s_bdev->bd_disk->fops->direct_access) {
|
||||
sbi->s_mount_opt &= (~EXT2_MOUNT_XIP);
|
||||
ext2_warning(sb, __FUNCTION__,
|
||||
"ignoring xip option - not supported by bdev");
|
||||
}
|
||||
}
|
||||
|
||||
struct page*
|
||||
ext2_get_xip_page(struct address_space *mapping, sector_t blockno,
|
||||
struct page *
|
||||
ext2_get_xip_page(struct address_space *mapping, sector_t offset,
|
||||
int create)
|
||||
{
|
||||
int rc;
|
||||
unsigned long data;
|
||||
struct buffer_head tmp;
|
||||
sector_t sector;
|
||||
|
||||
tmp.b_state = 0;
|
||||
tmp.b_blocknr = 0;
|
||||
rc = ext2_get_block(mapping->host, blockno/(PAGE_SIZE/512) , &tmp,
|
||||
create);
|
||||
/* first, retrieve the sector number */
|
||||
rc = __ext2_get_sector(mapping->host, offset, create, &sector);
|
||||
if (rc)
|
||||
return ERR_PTR(rc);
|
||||
if (tmp.b_blocknr == 0) {
|
||||
/* SPARSE block */
|
||||
BUG_ON(create);
|
||||
return ERR_PTR(-ENODATA);
|
||||
}
|
||||
goto error;
|
||||
|
||||
/* retrieve address of the target data */
|
||||
rc = __inode_direct_access
|
||||
(mapping->host,tmp.b_blocknr*(PAGE_SIZE/512) ,&data);
|
||||
if (rc)
|
||||
return ERR_PTR(rc);
|
||||
(mapping->host, sector * (PAGE_SIZE/512), &data);
|
||||
if (!rc)
|
||||
return virt_to_page(data);
|
||||
|
||||
SetPageUptodate(virt_to_page(data));
|
||||
return virt_to_page(data);
|
||||
error:
|
||||
return ERR_PTR(rc);
|
||||
}
|
||||
|
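
The rewritten ext2_get_xip_page() above performs the lookup in two explicit steps: map the file offset to an on-disk block with __ext2_get_sector() (which reports holes as -ENODATA), then hand the result to the block device's direct_access() method via __inode_direct_access(). A condensed sketch of that flow, reusing only the helper names from the hunk above (error handling trimmed; not a drop-in replacement):

static struct page *xip_page_lookup_sketch(struct address_space *mapping,
					   sector_t offset, int create)
{
	sector_t sector;
	unsigned long data;
	int rc;

	/* step 1: file offset -> block number (holes come back as -ENODATA) */
	rc = __ext2_get_sector(mapping->host, offset, create, &sector);
	if (rc)
		return ERR_PTR(rc);

	/* step 2: block -> directly addressable memory via direct_access() */
	rc = __inode_direct_access(mapping->host,
				   sector * (PAGE_SIZE / 512), &data);
	return rc ? ERR_PTR(rc) : virt_to_page(data);
}
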
@ -15,7 +15,6 @@
|
||||
#include <linux/pagemap.h>
|
||||
#include <linux/blkdev.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/root_dev.h>
|
||||
#include <linux/statfs.h>
|
||||
#include <linux/kdev_t.h>
|
||||
#include <asm/uaccess.h>
|
||||
@ -160,8 +159,6 @@ static int read_name(struct inode *ino, char *name)
|
||||
ino->i_size = i_size;
|
||||
ino->i_blksize = i_blksize;
|
||||
ino->i_blocks = i_blocks;
|
||||
if((ino->i_sb->s_dev == ROOT_DEV) && (ino->i_uid == getuid()))
|
||||
ino->i_uid = 0;
|
||||
return(0);
|
||||
}
|
||||
|
||||
@ -841,16 +838,10 @@ int hostfs_setattr(struct dentry *dentry, struct iattr *attr)
|
||||
attrs.ia_mode = attr->ia_mode;
|
||||
}
|
||||
if(attr->ia_valid & ATTR_UID){
|
||||
if((dentry->d_inode->i_sb->s_dev == ROOT_DEV) &&
|
||||
(attr->ia_uid == 0))
|
||||
attr->ia_uid = getuid();
|
||||
attrs.ia_valid |= HOSTFS_ATTR_UID;
|
||||
attrs.ia_uid = attr->ia_uid;
|
||||
}
|
||||
if(attr->ia_valid & ATTR_GID){
|
||||
if((dentry->d_inode->i_sb->s_dev == ROOT_DEV) &&
|
||||
(attr->ia_gid == 0))
|
||||
attr->ia_gid = getgid();
|
||||
attrs.ia_valid |= HOSTFS_ATTR_GID;
|
||||
attrs.ia_gid = attr->ia_gid;
|
||||
}
|
||||
|
@ -233,7 +233,7 @@ static ssize_t read_proc(struct file *file, char *buf, ssize_t count,
|
||||
set_fs(USER_DS);
|
||||
|
||||
if(ppos) *ppos = file->f_pos;
|
||||
return(n);
|
||||
return n;
|
||||
}
|
||||
|
||||
static ssize_t hppfs_read_file(int fd, char *buf, ssize_t count)
|
||||
@ -254,7 +254,7 @@ static ssize_t hppfs_read_file(int fd, char *buf, ssize_t count)
|
||||
err = os_read_file(fd, new_buf, cur);
|
||||
if(err < 0){
|
||||
printk("hppfs_read : read failed, errno = %d\n",
|
||||
count);
|
||||
err);
|
||||
n = err;
|
||||
goto out_free;
|
||||
}
|
||||
@ -271,7 +271,7 @@ static ssize_t hppfs_read_file(int fd, char *buf, ssize_t count)
|
||||
out_free:
|
||||
kfree(new_buf);
|
||||
out:
|
||||
return(n);
|
||||
return n;
|
||||
}
|
||||
|
||||
static ssize_t hppfs_read(struct file *file, char *buf, size_t count,
|
||||
|
45
fs/inode.c
@ -757,6 +757,7 @@ EXPORT_SYMBOL(igrab);
|
||||
* @head: the head of the list to search
|
||||
* @test: callback used for comparisons between inodes
|
||||
* @data: opaque data pointer to pass to @test
|
||||
* @wait: if true wait for the inode to be unlocked, if false do not
|
||||
*
|
||||
* ifind() searches for the inode specified by @data in the inode
|
||||
* cache. This is a generalized version of ifind_fast() for file systems where
|
||||
@ -771,7 +772,7 @@ EXPORT_SYMBOL(igrab);
|
||||
*/
|
||||
static inline struct inode *ifind(struct super_block *sb,
|
||||
struct hlist_head *head, int (*test)(struct inode *, void *),
|
||||
void *data)
|
||||
void *data, const int wait)
|
||||
{
|
||||
struct inode *inode;
|
||||
|
||||
@ -780,7 +781,8 @@ static inline struct inode *ifind(struct super_block *sb,
|
||||
if (inode) {
|
||||
__iget(inode);
|
||||
spin_unlock(&inode_lock);
|
||||
wait_on_inode(inode);
|
||||
if (likely(wait))
|
||||
wait_on_inode(inode);
|
||||
return inode;
|
||||
}
|
||||
spin_unlock(&inode_lock);
|
||||
@ -820,7 +822,7 @@ static inline struct inode *ifind_fast(struct super_block *sb,
|
||||
}
|
||||
|
||||
/**
 * ilookup5 - search for an inode in the inode cache
 * ilookup5_nowait - search for an inode in the inode cache
 * @sb: super block of file system to search
 * @hashval: hash value (usually inode number) to search for
 * @test: callback used for comparisons between inodes
@ -832,7 +834,38 @@ static inline struct inode *ifind_fast(struct super_block *sb,
 * identification of an inode.
 *
 * If the inode is in the cache, the inode is returned with an incremented
 * reference count.
 * reference count. Note, the inode lock is not waited upon so you have to be
 * very careful what you do with the returned inode. You probably should be
 * using ilookup5() instead.
 *
 * Otherwise NULL is returned.
 *
 * Note, @test is called with the inode_lock held, so can't sleep.
 */
struct inode *ilookup5_nowait(struct super_block *sb, unsigned long hashval,
		int (*test)(struct inode *, void *), void *data)
{
	struct hlist_head *head = inode_hashtable + hash(sb, hashval);

	return ifind(sb, head, test, data, 0);
}

EXPORT_SYMBOL(ilookup5_nowait);

/**
 * ilookup5 - search for an inode in the inode cache
 * @sb: super block of file system to search
 * @hashval: hash value (usually inode number) to search for
 * @test: callback used for comparisons between inodes
 * @data: opaque data pointer to pass to @test
 *
 * ilookup5() uses ifind() to search for the inode specified by @hashval and
 * @data in the inode cache. This is a generalized version of ilookup() for
 * file systems where the inode number is not sufficient for unique
 * identification of an inode.
 *
 * If the inode is in the cache, the inode lock is waited upon and the inode is
 * returned with an incremented reference count.
 *
 * Otherwise NULL is returned.
 *
@ -843,7 +876,7 @@ struct inode *ilookup5(struct super_block *sb, unsigned long hashval,
{
	struct hlist_head *head = inode_hashtable + hash(sb, hashval);

	return ifind(sb, head, test, data);
	return ifind(sb, head, test, data, 1);
}

EXPORT_SYMBOL(ilookup5);
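
As the kernel-doc above spells out, ilookup5_nowait() returns a referenced inode without calling wait_on_inode(), so the caller may receive an inode that is still locked and not fully set up, whereas ilookup5() waits first. A hedged usage sketch; the test callback, the key field and the MY_FS_I() accessor are invented for illustration and are not part of this patch:

/* Comparison callback: runs with inode_lock held, so it must not sleep. */
static int my_test(struct inode *inode, void *opaque)
{
	return MY_FS_I(inode)->object_id == *(unsigned long *)opaque; /* assumed accessor */
}

static struct inode *probe_cached_inode(struct super_block *sb, unsigned long id)
{
	/* Non-blocking probe: the returned inode may still be I_LOCK/I_NEW. */
	struct inode *inode = ilookup5_nowait(sb, id, my_test, &id);

	/* When a fully initialised inode is required, ilookup5() is the
	 * safer choice because it waits on the inode lock first. */
	return inode;
}
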
@ -900,7 +933,7 @@ struct inode *iget5_locked(struct super_block *sb, unsigned long hashval,
|
||||
struct hlist_head *head = inode_hashtable + hash(sb, hashval);
|
||||
struct inode *inode;
|
||||
|
||||
inode = ifind(sb, head, test, data);
|
||||
inode = ifind(sb, head, test, data, 1);
|
||||
if (inode)
|
||||
return inode;
|
||||
/*
|
||||
|
58
fs/inotify.c
@ -29,8 +29,6 @@
|
||||
#include <linux/mount.h>
|
||||
#include <linux/namei.h>
|
||||
#include <linux/poll.h>
|
||||
#include <linux/device.h>
|
||||
#include <linux/miscdevice.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/writeback.h>
|
||||
@ -45,8 +43,8 @@ static kmem_cache_t *event_cachep;
|
||||
|
||||
static struct vfsmount *inotify_mnt;
|
||||
|
||||
/* These are configurable via /proc/sys/inotify */
|
||||
int inotify_max_user_devices;
|
||||
/* these are configurable via /proc/sys/fs/inotify/ */
|
||||
int inotify_max_user_instances;
|
||||
int inotify_max_user_watches;
|
||||
int inotify_max_queued_events;
|
||||
|
||||
@ -125,6 +123,47 @@ struct inotify_watch {
|
||||
u32 mask; /* event mask for this watch */
|
||||
};
|
||||
|
||||
#ifdef CONFIG_SYSCTL
|
||||
|
||||
#include <linux/sysctl.h>
|
||||
|
||||
static int zero;
|
||||
|
||||
ctl_table inotify_table[] = {
|
||||
{
|
||||
.ctl_name = INOTIFY_MAX_USER_INSTANCES,
|
||||
.procname = "max_user_instances",
|
||||
.data = &inotify_max_user_instances,
|
||||
.maxlen = sizeof(int),
|
||||
.mode = 0644,
|
||||
.proc_handler = &proc_dointvec_minmax,
|
||||
.strategy = &sysctl_intvec,
|
||||
.extra1 = &zero,
|
||||
},
|
||||
{
|
||||
.ctl_name = INOTIFY_MAX_USER_WATCHES,
|
||||
.procname = "max_user_watches",
|
||||
.data = &inotify_max_user_watches,
|
||||
.maxlen = sizeof(int),
|
||||
.mode = 0644,
|
||||
.proc_handler = &proc_dointvec_minmax,
|
||||
.strategy = &sysctl_intvec,
|
||||
.extra1 = &zero,
|
||||
},
|
||||
{
|
||||
.ctl_name = INOTIFY_MAX_QUEUED_EVENTS,
|
||||
.procname = "max_queued_events",
|
||||
.data = &inotify_max_queued_events,
|
||||
.maxlen = sizeof(int),
|
||||
.mode = 0644,
|
||||
.proc_handler = &proc_dointvec_minmax,
|
||||
.strategy = &sysctl_intvec,
|
||||
.extra1 = &zero
|
||||
},
|
||||
{ .ctl_name = 0 }
|
||||
};
|
||||
#endif /* CONFIG_SYSCTL */
|
||||
|
||||
static inline void get_inotify_dev(struct inotify_device *dev)
|
||||
{
|
||||
atomic_inc(&dev->count);
|
||||
@ -842,7 +881,7 @@ asmlinkage long sys_inotify_init(void)
|
||||
|
||||
user = get_uid(current->user);
|
||||
|
||||
if (unlikely(atomic_read(&user->inotify_devs) >= inotify_max_user_devices)) {
|
||||
if (unlikely(atomic_read(&user->inotify_devs) >= inotify_max_user_instances)) {
|
||||
ret = -EMFILE;
|
||||
goto out_err;
|
||||
}
|
||||
@ -893,7 +932,7 @@ asmlinkage long sys_inotify_add_watch(int fd, const char *path, u32 mask)
|
||||
|
||||
dev = filp->private_data;
|
||||
|
||||
ret = find_inode ((const char __user*)path, &nd);
|
||||
ret = find_inode((const char __user*) path, &nd);
|
||||
if (ret)
|
||||
goto fput_and_out;
|
||||
|
||||
@ -950,8 +989,9 @@ asmlinkage long sys_inotify_rm_watch(int fd, u32 wd)
|
||||
if (!filp)
|
||||
return -EBADF;
|
||||
dev = filp->private_data;
|
||||
ret = inotify_ignore (dev, wd);
|
||||
ret = inotify_ignore(dev, wd);
|
||||
fput(filp);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
@ -979,7 +1019,7 @@ static int __init inotify_init(void)
|
||||
inotify_mnt = kern_mount(&inotify_fs_type);
|
||||
|
||||
inotify_max_queued_events = 8192;
|
||||
inotify_max_user_devices = 128;
|
||||
inotify_max_user_instances = 8;
|
||||
inotify_max_user_watches = 8192;
|
||||
|
||||
atomic_set(&inotify_cookie, 0);
|
||||
@ -991,8 +1031,6 @@ static int __init inotify_init(void)
|
||||
sizeof(struct inotify_kernel_event),
|
||||
0, SLAB_PANIC, NULL, NULL);
|
||||
|
||||
printk(KERN_INFO "inotify syscall\n");
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
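
With the hunks above, the inotify limits become tunable through /proc/sys/fs/inotify/ (max_user_instances, max_user_watches, max_queued_events) instead of the old max_user_devices variable. A small user-space sketch for reading one of those limits; only the procfs paths come from the sysctl table above, everything else is ordinary stdio:

#include <stdio.h>

/* Returns the limit, or -1 if the file cannot be read. */
static long read_inotify_limit(const char *name)
{
	char path[128];
	long value = -1;
	FILE *f;

	snprintf(path, sizeof(path), "/proc/sys/fs/inotify/%s", name);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%ld", &value) != 1)
		value = -1;
	fclose(f);
	return value;
}

/* e.g. read_inotify_limit("max_user_watches") -> 8192 with the defaults above */
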
@ -7,7 +7,7 @@
|
||||
*
|
||||
* For licensing information, see the file 'LICENCE' in this directory.
|
||||
*
|
||||
* $Id: build.c,v 1.70 2005/02/28 08:21:05 dedekind Exp $
|
||||
* $Id: build.c,v 1.71 2005/07/12 16:37:08 dedekind Exp $
|
||||
*
|
||||
*/
|
||||
|
||||
@ -336,13 +336,6 @@ int jffs2_do_mount_fs(struct jffs2_sb_info *c)
|
||||
c->blocks[i].bad_count = 0;
|
||||
}
|
||||
|
||||
init_MUTEX(&c->alloc_sem);
|
||||
init_MUTEX(&c->erase_free_sem);
|
||||
init_waitqueue_head(&c->erase_wait);
|
||||
init_waitqueue_head(&c->inocache_wq);
|
||||
spin_lock_init(&c->erase_completion_lock);
|
||||
spin_lock_init(&c->inocache_lock);
|
||||
|
||||
INIT_LIST_HEAD(&c->clean_list);
|
||||
INIT_LIST_HEAD(&c->very_dirty_list);
|
||||
INIT_LIST_HEAD(&c->dirty_list);
|
||||
|
182
fs/jffs2/erase.c
@ -7,7 +7,7 @@
|
||||
*
|
||||
* For licensing information, see the file 'LICENCE' in this directory.
|
||||
*
|
||||
* $Id: erase.c,v 1.76 2005/05/03 15:11:40 dedekind Exp $
|
||||
* $Id: erase.c,v 1.80 2005/07/14 19:46:24 joern Exp $
|
||||
*
|
||||
*/
|
||||
|
||||
@ -300,100 +300,86 @@ static void jffs2_free_all_node_refs(struct jffs2_sb_info *c, struct jffs2_erase
|
||||
jeb->last_node = NULL;
|
||||
}
|
||||
|
||||
static int jffs2_block_check_erase(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, uint32_t *bad_offset)
|
||||
{
|
||||
void *ebuf;
|
||||
uint32_t ofs;
|
||||
size_t retlen;
|
||||
int ret = -EIO;
|
||||
|
||||
ebuf = kmalloc(PAGE_SIZE, GFP_KERNEL);
|
||||
if (!ebuf) {
|
||||
printk(KERN_WARNING "Failed to allocate page buffer for verifying erase at 0x%08x. Refiling\n", jeb->offset);
|
||||
return -EAGAIN;
|
||||
}
|
||||
|
||||
D1(printk(KERN_DEBUG "Verifying erase at 0x%08x\n", jeb->offset));
|
||||
|
||||
for (ofs = jeb->offset; ofs < jeb->offset + c->sector_size; ) {
|
||||
uint32_t readlen = min((uint32_t)PAGE_SIZE, jeb->offset + c->sector_size - ofs);
|
||||
int i;
|
||||
|
||||
*bad_offset = ofs;
|
||||
|
||||
ret = jffs2_flash_read(c, ofs, readlen, &retlen, ebuf);
|
||||
if (ret) {
|
||||
printk(KERN_WARNING "Read of newly-erased block at 0x%08x failed: %d. Putting on bad_list\n", ofs, ret);
|
||||
goto fail;
|
||||
}
|
||||
if (retlen != readlen) {
|
||||
printk(KERN_WARNING "Short read from newly-erased block at 0x%08x. Wanted %d, got %zd\n", ofs, readlen, retlen);
|
||||
goto fail;
|
||||
}
|
||||
for (i=0; i<readlen; i += sizeof(unsigned long)) {
|
||||
/* It's OK. We know it's properly aligned */
|
||||
unsigned long *datum = ebuf + i;
|
||||
if (*datum + 1) {
|
||||
*bad_offset += i;
|
||||
printk(KERN_WARNING "Newly-erased block contained word 0x%lx at offset 0x%08x\n", *datum, *bad_offset);
|
||||
goto fail;
|
||||
}
|
||||
}
|
||||
ofs += readlen;
|
||||
cond_resched();
|
||||
}
|
||||
ret = 0;
|
||||
fail:
|
||||
kfree(ebuf);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void jffs2_mark_erased_block(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb)
|
||||
{
|
||||
struct jffs2_raw_node_ref *marker_ref = NULL;
|
||||
unsigned char *ebuf;
|
||||
size_t retlen;
|
||||
int ret;
|
||||
uint32_t bad_offset;
|
||||
|
||||
if ((!jffs2_cleanmarker_oob(c)) && (c->cleanmarker_size > 0)) {
|
||||
marker_ref = jffs2_alloc_raw_node_ref();
|
||||
if (!marker_ref) {
|
||||
printk(KERN_WARNING "Failed to allocate raw node ref for clean marker\n");
|
||||
/* Stick it back on the list from whence it came and come back later */
|
||||
jffs2_erase_pending_trigger(c);
|
||||
spin_lock(&c->erase_completion_lock);
|
||||
list_add(&jeb->list, &c->erase_complete_list);
|
||||
spin_unlock(&c->erase_completion_lock);
|
||||
return;
|
||||
}
|
||||
switch (jffs2_block_check_erase(c, jeb, &bad_offset)) {
|
||||
case -EAGAIN: goto refile;
|
||||
case -EIO: goto filebad;
|
||||
}
|
||||
ebuf = kmalloc(PAGE_SIZE, GFP_KERNEL);
|
||||
if (!ebuf) {
|
||||
printk(KERN_WARNING "Failed to allocate page buffer for verifying erase at 0x%08x. Assuming it worked\n", jeb->offset);
|
||||
} else {
|
||||
uint32_t ofs = jeb->offset;
|
||||
|
||||
D1(printk(KERN_DEBUG "Verifying erase at 0x%08x\n", jeb->offset));
|
||||
while(ofs < jeb->offset + c->sector_size) {
|
||||
uint32_t readlen = min((uint32_t)PAGE_SIZE, jeb->offset + c->sector_size - ofs);
|
||||
int i;
|
||||
|
||||
bad_offset = ofs;
|
||||
|
||||
ret = c->mtd->read(c->mtd, ofs, readlen, &retlen, ebuf);
|
||||
|
||||
if (ret) {
|
||||
printk(KERN_WARNING "Read of newly-erased block at 0x%08x failed: %d. Putting on bad_list\n", ofs, ret);
|
||||
goto bad;
|
||||
}
|
||||
if (retlen != readlen) {
|
||||
printk(KERN_WARNING "Short read from newly-erased block at 0x%08x. Wanted %d, got %zd\n", ofs, readlen, retlen);
|
||||
goto bad;
|
||||
}
|
||||
for (i=0; i<readlen; i += sizeof(unsigned long)) {
|
||||
/* It's OK. We know it's properly aligned */
|
||||
unsigned long datum = *(unsigned long *)(&ebuf[i]);
|
||||
if (datum + 1) {
|
||||
bad_offset += i;
|
||||
printk(KERN_WARNING "Newly-erased block contained word 0x%lx at offset 0x%08x\n", datum, bad_offset);
|
||||
bad:
|
||||
if ((!jffs2_cleanmarker_oob(c)) && (c->cleanmarker_size > 0))
|
||||
jffs2_free_raw_node_ref(marker_ref);
|
||||
kfree(ebuf);
|
||||
bad2:
|
||||
spin_lock(&c->erase_completion_lock);
|
||||
/* Stick it on a list (any list) so
|
||||
erase_failed can take it right off
|
||||
again. Silly, but shouldn't happen
|
||||
often. */
|
||||
list_add(&jeb->list, &c->erasing_list);
|
||||
spin_unlock(&c->erase_completion_lock);
|
||||
jffs2_erase_failed(c, jeb, bad_offset);
|
||||
return;
|
||||
}
|
||||
}
|
||||
ofs += readlen;
|
||||
cond_resched();
|
||||
}
|
||||
kfree(ebuf);
|
||||
}
|
||||
|
||||
bad_offset = jeb->offset;
|
||||
|
||||
/* Write the erase complete marker */
|
||||
D1(printk(KERN_DEBUG "Writing erased marker to block at 0x%08x\n", jeb->offset));
|
||||
if (jffs2_cleanmarker_oob(c)) {
|
||||
bad_offset = jeb->offset;
|
||||
|
||||
/* Cleanmarker in oob area or no cleanmarker at all ? */
|
||||
if (jffs2_cleanmarker_oob(c) || c->cleanmarker_size == 0) {
|
||||
|
||||
if (jffs2_cleanmarker_oob(c)) {
|
||||
if (jffs2_write_nand_cleanmarker(c, jeb))
|
||||
goto filebad;
|
||||
}
|
||||
|
||||
if (jffs2_write_nand_cleanmarker(c, jeb))
|
||||
goto bad2;
|
||||
|
||||
jeb->first_node = jeb->last_node = NULL;
|
||||
|
||||
jeb->free_size = c->sector_size;
|
||||
jeb->used_size = 0;
|
||||
jeb->dirty_size = 0;
|
||||
jeb->wasted_size = 0;
|
||||
} else if (c->cleanmarker_size == 0) {
|
||||
jeb->first_node = jeb->last_node = NULL;
|
||||
|
||||
jeb->free_size = c->sector_size;
|
||||
jeb->used_size = 0;
|
||||
jeb->dirty_size = 0;
|
||||
jeb->wasted_size = 0;
|
||||
} else {
|
||||
|
||||
struct kvec vecs[1];
|
||||
struct jffs2_unknown_node marker = {
|
||||
.magic = cpu_to_je16(JFFS2_MAGIC_BITMASK),
|
||||
@ -401,21 +387,28 @@ static void jffs2_mark_erased_block(struct jffs2_sb_info *c, struct jffs2_eraseb
|
||||
.totlen = cpu_to_je32(c->cleanmarker_size)
|
||||
};
|
||||
|
||||
marker_ref = jffs2_alloc_raw_node_ref();
|
||||
if (!marker_ref) {
|
||||
printk(KERN_WARNING "Failed to allocate raw node ref for clean marker. Refiling\n");
|
||||
goto refile;
|
||||
}
|
||||
|
||||
marker.hdr_crc = cpu_to_je32(crc32(0, &marker, sizeof(struct jffs2_unknown_node)-4));
|
||||
|
||||
vecs[0].iov_base = (unsigned char *) &marker;
|
||||
vecs[0].iov_len = sizeof(marker);
|
||||
ret = jffs2_flash_direct_writev(c, vecs, 1, jeb->offset, &retlen);
|
||||
|
||||
if (ret) {
|
||||
printk(KERN_WARNING "Write clean marker to block at 0x%08x failed: %d\n",
|
||||
jeb->offset, ret);
|
||||
goto bad2;
|
||||
}
|
||||
if (retlen != sizeof(marker)) {
|
||||
printk(KERN_WARNING "Short write to newly-erased block at 0x%08x: Wanted %zd, got %zd\n",
|
||||
jeb->offset, sizeof(marker), retlen);
|
||||
goto bad2;
|
||||
if (ret || retlen != sizeof(marker)) {
|
||||
if (ret)
|
||||
printk(KERN_WARNING "Write clean marker to block at 0x%08x failed: %d\n",
|
||||
jeb->offset, ret);
|
||||
else
|
||||
printk(KERN_WARNING "Short write to newly-erased block at 0x%08x: Wanted %zd, got %zd\n",
|
||||
jeb->offset, sizeof(marker), retlen);
|
||||
|
||||
jffs2_free_raw_node_ref(marker_ref);
|
||||
goto filebad;
|
||||
}
|
||||
|
||||
marker_ref->next_in_ino = NULL;
|
||||
@ -444,5 +437,22 @@ static void jffs2_mark_erased_block(struct jffs2_sb_info *c, struct jffs2_eraseb
|
||||
c->nr_free_blocks++;
|
||||
spin_unlock(&c->erase_completion_lock);
|
||||
wake_up(&c->erase_wait);
|
||||
}
|
||||
return;
|
||||
|
||||
filebad:
|
||||
spin_lock(&c->erase_completion_lock);
|
||||
/* Stick it on a list (any list) so erase_failed can take it
|
||||
right off again. Silly, but shouldn't happen often. */
|
||||
list_add(&jeb->list, &c->erasing_list);
|
||||
spin_unlock(&c->erase_completion_lock);
|
||||
jffs2_erase_failed(c, jeb, bad_offset);
|
||||
return;
|
||||
|
||||
refile:
|
||||
/* Stick it back on the list from whence it came and come back later */
|
||||
jffs2_erase_pending_trigger(c);
|
||||
spin_lock(&c->erase_completion_lock);
|
||||
list_add(&jeb->list, &c->erase_complete_list);
|
||||
spin_unlock(&c->erase_completion_lock);
|
||||
return;
|
||||
}
|
||||
|
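
In jffs2_block_check_erase() above, the freshly erased block is read back in PAGE_SIZE chunks and every word is tested with "if (*datum + 1)": for an all-ones word the increment wraps to zero, so any non-zero result means the word is not 0xFFFF...FF and the erase is treated as failed. The same test in a more explicit form (illustration only, not part of the patch):

static inline int word_is_erased(unsigned long datum)
{
	/* equivalent to reading the "(*datum + 1)" check above as a failure test */
	return datum == ~0UL;
}
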
@ -7,7 +7,7 @@
|
||||
*
|
||||
* For licensing information, see the file 'LICENCE' in this directory.
|
||||
*
|
||||
* $Id: nodelist.c,v 1.97 2005/07/06 15:18:41 dwmw2 Exp $
|
||||
* $Id: nodelist.c,v 1.98 2005/07/10 15:15:32 dedekind Exp $
|
||||
*
|
||||
*/
|
||||
|
||||
@ -55,11 +55,11 @@ void jffs2_add_fd_to_list(struct jffs2_sb_info *c, struct jffs2_full_dirent *new
|
||||
});
|
||||
}
|
||||
|
||||
/* Put a new tmp_dnode_info into the list, keeping the list in
|
||||
order of increasing version
|
||||
*/
|
||||
|
||||
static void jffs2_add_tn_to_list(struct jffs2_tmp_dnode_info *tn, struct rb_root *list)
|
||||
/*
|
||||
* Put a new tmp_dnode_info into the temporaty RB-tree, keeping the list in
|
||||
* order of increasing version.
|
||||
*/
|
||||
static void jffs2_add_tn_to_tree(struct jffs2_tmp_dnode_info *tn, struct rb_root *list)
|
||||
{
|
||||
struct rb_node **p = &list->rb_node;
|
||||
struct rb_node * parent = NULL;
|
||||
@ -420,7 +420,7 @@ int jffs2_get_inode_nodes(struct jffs2_sb_info *c, struct jffs2_inode_info *f,
|
||||
D1(printk(KERN_DEBUG "dnode @%08x: ver %u, offset %04x, dsize %04x\n",
|
||||
ref_offset(ref), je32_to_cpu(node.i.version),
|
||||
je32_to_cpu(node.i.offset), je32_to_cpu(node.i.dsize)));
|
||||
jffs2_add_tn_to_list(tn, &ret_tn);
|
||||
jffs2_add_tn_to_tree(tn, &ret_tn);
|
||||
break;
|
||||
|
||||
default:
|
||||
|
@ -7,7 +7,7 @@
|
||||
*
|
||||
* For licensing information, see the file 'LICENCE' in this directory.
|
||||
*
|
||||
* $Id: os-linux.h,v 1.57 2005/07/06 12:13:09 dwmw2 Exp $
|
||||
* $Id: os-linux.h,v 1.58 2005/07/12 02:34:35 tpoynor Exp $
|
||||
*
|
||||
*/
|
||||
|
||||
@ -86,6 +86,8 @@ static inline void jffs2_init_inode_info(struct jffs2_inode_info *f)
|
||||
#define jffs2_dataflash(c) (0)
|
||||
#define jffs2_nor_ecc_flash_setup(c) (0)
|
||||
#define jffs2_nor_ecc_flash_cleanup(c) do {} while (0)
|
||||
#define jffs2_dataflash_setup(c) (0)
|
||||
#define jffs2_dataflash_cleanup(c) do {} while (0)
|
||||
|
||||
#else /* NAND and/or ECC'd NOR support present */
|
||||
|
||||
|
@ -7,7 +7,7 @@
|
||||
*
|
||||
* For licensing information, see the file 'LICENCE' in this directory.
|
||||
*
|
||||
* $Id: readinode.c,v 1.120 2005/07/05 21:03:07 dwmw2 Exp $
|
||||
* $Id: readinode.c,v 1.125 2005/07/10 13:13:55 dedekind Exp $
|
||||
*
|
||||
*/
|
||||
|
||||
@ -151,6 +151,9 @@ int jffs2_add_full_dnode_to_inode(struct jffs2_sb_info *c, struct jffs2_inode_in
|
||||
|
||||
D1(printk(KERN_DEBUG "jffs2_add_full_dnode_to_inode(ino #%u, f %p, fn %p)\n", f->inocache->ino, f, fn));
|
||||
|
||||
if (unlikely(!fn->size))
|
||||
return 0;
|
||||
|
||||
newfrag = jffs2_alloc_node_frag();
|
||||
if (unlikely(!newfrag))
|
||||
return -ENOMEM;
|
||||
@ -158,11 +161,6 @@ int jffs2_add_full_dnode_to_inode(struct jffs2_sb_info *c, struct jffs2_inode_in
|
||||
D2(printk(KERN_DEBUG "adding node %04x-%04x @0x%08x on flash, newfrag *%p\n",
|
||||
fn->ofs, fn->ofs+fn->size, ref_offset(fn->raw), newfrag));
|
||||
|
||||
if (unlikely(!fn->size)) {
|
||||
jffs2_free_node_frag(newfrag);
|
||||
return 0;
|
||||
}
|
||||
|
||||
newfrag->ofs = fn->ofs;
|
||||
newfrag->size = fn->size;
|
||||
newfrag->node = fn;
|
||||
@ -560,7 +558,6 @@ static int jffs2_do_read_inode_internal(struct jffs2_sb_info *c,
|
||||
}
|
||||
next_tn:
|
||||
BUG_ON(rb->rb_left);
|
||||
repl_rb = NULL;
|
||||
if (rb->rb_parent && rb->rb_parent->rb_left == rb) {
|
||||
/* We were then left-hand child of our parent. We need
|
||||
to move our own right-hand child into our place. */
|
||||
|
@ -7,7 +7,7 @@
|
||||
*
|
||||
* For licensing information, see the file 'LICENCE' in this directory.
|
||||
*
|
||||
* $Id: super.c,v 1.106 2005/05/18 11:37:25 dedekind Exp $
|
||||
* $Id: super.c,v 1.107 2005/07/12 16:37:08 dedekind Exp $
|
||||
*
|
||||
*/
|
||||
|
||||
@ -140,6 +140,15 @@ static struct super_block *jffs2_get_sb_mtd(struct file_system_type *fs_type,
|
||||
D1(printk(KERN_DEBUG "jffs2_get_sb_mtd(): New superblock for device %d (\"%s\")\n",
|
||||
mtd->index, mtd->name));
|
||||
|
||||
/* Initialize JFFS2 superblock locks, the further initialization will be
|
||||
* done later */
|
||||
init_MUTEX(&c->alloc_sem);
|
||||
init_MUTEX(&c->erase_free_sem);
|
||||
init_waitqueue_head(&c->erase_wait);
|
||||
init_waitqueue_head(&c->inocache_wq);
|
||||
spin_lock_init(&c->erase_completion_lock);
|
||||
spin_lock_init(&c->inocache_lock);
|
||||
|
||||
sb->s_op = &jffs2_super_operations;
|
||||
sb->s_flags = flags | MS_NOATIME;
|
||||
|
||||
|
@ -25,36 +25,6 @@
|
||||
#include "jfs_metapage.h"
|
||||
#include "jfs_debug.h"
|
||||
|
||||
/*
|
||||
* Debug code for double-checking block map
|
||||
*/
|
||||
/* #define _JFS_DEBUG_DMAP 1 */
|
||||
|
||||
#ifdef _JFS_DEBUG_DMAP
|
||||
#define DBINITMAP(size,ipbmap,results) \
|
||||
DBinitmap(size,ipbmap,results)
|
||||
#define DBALLOC(dbmap,mapsize,blkno,nblocks) \
|
||||
DBAlloc(dbmap,mapsize,blkno,nblocks)
|
||||
#define DBFREE(dbmap,mapsize,blkno,nblocks) \
|
||||
DBFree(dbmap,mapsize,blkno,nblocks)
|
||||
#define DBALLOCCK(dbmap,mapsize,blkno,nblocks) \
|
||||
DBAllocCK(dbmap,mapsize,blkno,nblocks)
|
||||
#define DBFREECK(dbmap,mapsize,blkno,nblocks) \
|
||||
DBFreeCK(dbmap,mapsize,blkno,nblocks)
|
||||
|
||||
static void DBinitmap(s64, struct inode *, u32 **);
|
||||
static void DBAlloc(uint *, s64, s64, s64);
|
||||
static void DBFree(uint *, s64, s64, s64);
|
||||
static void DBAllocCK(uint *, s64, s64, s64);
|
||||
static void DBFreeCK(uint *, s64, s64, s64);
|
||||
#else
|
||||
#define DBINITMAP(size,ipbmap,results)
|
||||
#define DBALLOC(dbmap, mapsize, blkno, nblocks)
|
||||
#define DBFREE(dbmap, mapsize, blkno, nblocks)
|
||||
#define DBALLOCCK(dbmap, mapsize, blkno, nblocks)
|
||||
#define DBFREECK(dbmap, mapsize, blkno, nblocks)
|
||||
#endif /* _JFS_DEBUG_DMAP */
|
||||
|
||||
/*
|
||||
* SERIALIZATION of the Block Allocation Map.
|
||||
*
|
||||
@ -242,7 +212,6 @@ int dbMount(struct inode *ipbmap)
|
||||
JFS_SBI(ipbmap->i_sb)->bmap = bmp;
|
||||
|
||||
memset(bmp->db_active, 0, sizeof(bmp->db_active));
|
||||
DBINITMAP(bmp->db_mapsize, ipbmap, &bmp->db_DBmap);
|
||||
|
||||
/*
|
||||
* allocate/initialize the bmap lock
|
||||
@ -407,16 +376,12 @@ int dbFree(struct inode *ip, s64 blkno, s64 nblocks)
|
||||
*/
|
||||
nb = min(rem, BPERDMAP - (blkno & (BPERDMAP - 1)));
|
||||
|
||||
DBALLOCCK(bmp->db_DBmap, bmp->db_mapsize, blkno, nb);
|
||||
|
||||
/* free the blocks. */
|
||||
if ((rc = dbFreeDmap(bmp, dp, blkno, nb))) {
|
||||
release_metapage(mp);
|
||||
IREAD_UNLOCK(ipbmap);
|
||||
return (rc);
|
||||
}
|
||||
|
||||
DBFREE(bmp->db_DBmap, bmp->db_mapsize, blkno, nb);
|
||||
}
|
||||
|
||||
/* write the last buffer. */
|
||||
@ -775,10 +740,6 @@ int dbAlloc(struct inode *ip, s64 hint, s64 nblocks, s64 * results)
|
||||
IWRITE_LOCK(ipbmap);
|
||||
|
||||
rc = dbAllocAny(bmp, nblocks, l2nb, results);
|
||||
if (rc == 0) {
|
||||
DBALLOC(bmp->db_DBmap, bmp->db_mapsize, *results,
|
||||
nblocks);
|
||||
}
|
||||
|
||||
goto write_unlock;
|
||||
}
|
||||
@ -836,8 +797,6 @@ int dbAlloc(struct inode *ip, s64 hint, s64 nblocks, s64 * results)
|
||||
!= -ENOSPC) {
|
||||
if (rc == 0) {
|
||||
*results = blkno;
|
||||
DBALLOC(bmp->db_DBmap, bmp->db_mapsize,
|
||||
*results, nblocks);
|
||||
mark_metapage_dirty(mp);
|
||||
}
|
||||
|
||||
@ -863,11 +822,8 @@ int dbAlloc(struct inode *ip, s64 hint, s64 nblocks, s64 * results)
|
||||
if ((rc =
|
||||
dbAllocNear(bmp, dp, blkno, (int) nblocks, l2nb, results))
|
||||
!= -ENOSPC) {
|
||||
if (rc == 0) {
|
||||
DBALLOC(bmp->db_DBmap, bmp->db_mapsize,
|
||||
*results, nblocks);
|
||||
if (rc == 0)
|
||||
mark_metapage_dirty(mp);
|
||||
}
|
||||
|
||||
release_metapage(mp);
|
||||
goto read_unlock;
|
||||
@ -878,11 +834,8 @@ int dbAlloc(struct inode *ip, s64 hint, s64 nblocks, s64 * results)
|
||||
*/
|
||||
if ((rc = dbAllocDmapLev(bmp, dp, (int) nblocks, l2nb, results))
|
||||
!= -ENOSPC) {
|
||||
if (rc == 0) {
|
||||
DBALLOC(bmp->db_DBmap, bmp->db_mapsize,
|
||||
*results, nblocks);
|
||||
if (rc == 0)
|
||||
mark_metapage_dirty(mp);
|
||||
}
|
||||
|
||||
release_metapage(mp);
|
||||
goto read_unlock;
|
||||
@ -896,13 +849,9 @@ int dbAlloc(struct inode *ip, s64 hint, s64 nblocks, s64 * results)
|
||||
* the same allocation group as the hint.
|
||||
*/
|
||||
IWRITE_LOCK(ipbmap);
|
||||
if ((rc = dbAllocAG(bmp, agno, nblocks, l2nb, results))
|
||||
!= -ENOSPC) {
|
||||
if (rc == 0)
|
||||
DBALLOC(bmp->db_DBmap, bmp->db_mapsize,
|
||||
*results, nblocks);
|
||||
if ((rc = dbAllocAG(bmp, agno, nblocks, l2nb, results)) != -ENOSPC)
|
||||
goto write_unlock;
|
||||
}
|
||||
|
||||
IWRITE_UNLOCK(ipbmap);
|
||||
|
||||
|
||||
@ -918,9 +867,6 @@ int dbAlloc(struct inode *ip, s64 hint, s64 nblocks, s64 * results)
|
||||
*/
|
||||
if ((rc = dbAllocAG(bmp, agno, nblocks, l2nb, results)) == -ENOSPC)
|
||||
rc = dbAllocAny(bmp, nblocks, l2nb, results);
|
||||
if (rc == 0) {
|
||||
DBALLOC(bmp->db_DBmap, bmp->db_mapsize, *results, nblocks);
|
||||
}
|
||||
|
||||
write_unlock:
|
||||
IWRITE_UNLOCK(ipbmap);
|
||||
@ -992,10 +938,9 @@ int dbAllocExact(struct inode *ip, s64 blkno, int nblocks)
|
||||
|
||||
IREAD_UNLOCK(ipbmap);
|
||||
|
||||
if (rc == 0) {
|
||||
DBALLOC(bmp->db_DBmap, bmp->db_mapsize, blkno, nblocks);
|
||||
if (rc == 0)
|
||||
mark_metapage_dirty(mp);
|
||||
}
|
||||
|
||||
release_metapage(mp);
|
||||
|
||||
return (rc);
|
||||
@ -1144,7 +1089,6 @@ static int dbExtend(struct inode *ip, s64 blkno, s64 nblocks, s64 addnblocks)
|
||||
return -EIO;
|
||||
}
|
||||
|
||||
DBALLOCCK(bmp->db_DBmap, bmp->db_mapsize, blkno, nblocks);
|
||||
dp = (struct dmap *) mp->data;
|
||||
|
||||
/* try to allocate the blocks immediately following the
|
||||
@ -1155,11 +1099,9 @@ static int dbExtend(struct inode *ip, s64 blkno, s64 nblocks, s64 addnblocks)
|
||||
IREAD_UNLOCK(ipbmap);
|
||||
|
||||
/* were we successful ? */
|
||||
if (rc == 0) {
|
||||
DBALLOC(bmp->db_DBmap, bmp->db_mapsize, extblkno,
|
||||
addnblocks);
|
||||
if (rc == 0)
|
||||
write_metapage(mp);
|
||||
} else
|
||||
else
|
||||
/* we were not successful */
|
||||
release_metapage(mp);
|
||||
|
||||
@ -3185,16 +3127,12 @@ int dbAllocBottomUp(struct inode *ip, s64 blkno, s64 nblocks)
|
||||
*/
|
||||
nb = min(rem, BPERDMAP - (blkno & (BPERDMAP - 1)));
|
||||
|
||||
DBFREECK(bmp->db_DBmap, bmp->db_mapsize, blkno, nb);
|
||||
|
||||
/* allocate the blocks. */
|
||||
if ((rc = dbAllocDmapBU(bmp, dp, blkno, nb))) {
|
||||
release_metapage(mp);
|
||||
IREAD_UNLOCK(ipbmap);
|
||||
return (rc);
|
||||
}
|
||||
|
||||
DBALLOC(bmp->db_DBmap, bmp->db_mapsize, blkno, nb);
|
||||
}
|
||||
|
||||
/* write the last buffer. */
|
||||
@ -4041,223 +3979,3 @@ s64 dbMapFileSizeToMapSize(struct inode * ipbmap)
|
||||
|
||||
return (nblocks);
|
||||
}
|
||||
|
||||
|
||||
#ifdef _JFS_DEBUG_DMAP
|
||||
/*
|
||||
* DBinitmap()
|
||||
*/
|
||||
static void DBinitmap(s64 size, struct inode *ipbmap, u32 ** results)
|
||||
{
|
||||
int npages;
|
||||
u32 *dbmap, *d;
|
||||
int n;
|
||||
s64 lblkno, cur_block;
|
||||
struct dmap *dp;
|
||||
struct metapage *mp;
|
||||
|
||||
npages = size / 32768;
|
||||
npages += (size % 32768) ? 1 : 0;
|
||||
|
||||
dbmap = (u32 *) xmalloc(npages * 4096, L2PSIZE, kernel_heap);
|
||||
if (dbmap == NULL)
|
||||
BUG(); /* Not robust since this is only unused debug code */
|
||||
|
||||
for (n = 0, d = dbmap; n < npages; n++, d += 1024)
|
||||
bzero(d, 4096);
|
||||
|
||||
/* Need to initialize from disk map pages
|
||||
*/
|
||||
for (d = dbmap, cur_block = 0; cur_block < size;
|
||||
cur_block += BPERDMAP, d += LPERDMAP) {
|
||||
lblkno = BLKTODMAP(cur_block,
|
||||
JFS_SBI(ipbmap->i_sb)->bmap->
|
||||
db_l2nbperpage);
|
||||
mp = read_metapage(ipbmap, lblkno, PSIZE, 0);
|
||||
if (mp == NULL) {
|
||||
jfs_error(ipbmap->i_sb,
|
||||
"DBinitmap: could not read disk map page");
|
||||
continue;
|
||||
}
|
||||
dp = (struct dmap *) mp->data;
|
||||
|
||||
for (n = 0; n < LPERDMAP; n++)
|
||||
d[n] = le32_to_cpu(dp->wmap[n]);
|
||||
|
||||
release_metapage(mp);
|
||||
}
|
||||
|
||||
*results = dbmap;
|
||||
}
|
||||
|
||||
|
||||
/*
|
||||
* DBAlloc()
|
||||
*/
|
||||
void DBAlloc(uint * dbmap, s64 mapsize, s64 blkno, s64 nblocks)
|
||||
{
|
||||
int word, nb, bitno;
|
||||
u32 mask;
|
||||
|
||||
assert(blkno > 0 && blkno < mapsize);
|
||||
assert(nblocks > 0 && nblocks <= mapsize);
|
||||
|
||||
assert(blkno + nblocks <= mapsize);
|
||||
|
||||
dbmap += (blkno / 32);
|
||||
while (nblocks > 0) {
|
||||
bitno = blkno & (32 - 1);
|
||||
nb = min(nblocks, 32 - bitno);
|
||||
|
||||
mask = (0xffffffff << (32 - nb) >> bitno);
|
||||
assert((mask & *dbmap) == 0);
|
||||
*dbmap |= mask;
|
||||
|
||||
dbmap++;
|
||||
blkno += nb;
|
||||
nblocks -= nb;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
/*
|
||||
* DBFree()
|
||||
*/
|
||||
static void DBFree(uint * dbmap, s64 mapsize, s64 blkno, s64 nblocks)
|
||||
{
|
||||
int word, nb, bitno;
|
||||
u32 mask;
|
||||
|
||||
assert(blkno > 0 && blkno < mapsize);
|
||||
assert(nblocks > 0 && nblocks <= mapsize);
|
||||
|
||||
assert(blkno + nblocks <= mapsize);
|
||||
|
||||
dbmap += (blkno / 32);
|
||||
while (nblocks > 0) {
|
||||
bitno = blkno & (32 - 1);
|
||||
nb = min(nblocks, 32 - bitno);
|
||||
|
||||
mask = (0xffffffff << (32 - nb) >> bitno);
|
||||
assert((mask & *dbmap) == mask);
|
||||
*dbmap &= ~mask;
|
||||
|
||||
dbmap++;
|
||||
blkno += nb;
|
||||
nblocks -= nb;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
/*
|
||||
* DBAllocCK()
|
||||
*/
|
||||
static void DBAllocCK(uint * dbmap, s64 mapsize, s64 blkno, s64 nblocks)
|
||||
{
|
||||
int word, nb, bitno;
|
||||
u32 mask;
|
||||
|
||||
assert(blkno > 0 && blkno < mapsize);
|
||||
assert(nblocks > 0 && nblocks <= mapsize);
|
||||
|
||||
assert(blkno + nblocks <= mapsize);
|
||||
|
||||
dbmap += (blkno / 32);
|
||||
while (nblocks > 0) {
|
||||
bitno = blkno & (32 - 1);
|
||||
nb = min(nblocks, 32 - bitno);
|
||||
|
||||
mask = (0xffffffff << (32 - nb) >> bitno);
|
||||
assert((mask & *dbmap) == mask);
|
||||
|
||||
dbmap++;
|
||||
blkno += nb;
|
||||
nblocks -= nb;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
/*
|
||||
* DBFreeCK()
|
||||
*/
|
||||
static void DBFreeCK(uint * dbmap, s64 mapsize, s64 blkno, s64 nblocks)
|
||||
{
|
||||
int word, nb, bitno;
|
||||
u32 mask;
|
||||
|
||||
assert(blkno > 0 && blkno < mapsize);
|
||||
assert(nblocks > 0 && nblocks <= mapsize);
|
||||
|
||||
assert(blkno + nblocks <= mapsize);
|
||||
|
||||
dbmap += (blkno / 32);
|
||||
while (nblocks > 0) {
|
||||
bitno = blkno & (32 - 1);
|
||||
nb = min(nblocks, 32 - bitno);
|
||||
|
||||
mask = (0xffffffff << (32 - nb) >> bitno);
|
||||
assert((mask & *dbmap) == 0);
|
||||
|
||||
dbmap++;
|
||||
blkno += nb;
|
||||
nblocks -= nb;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
/*
|
||||
* dbPrtMap()
|
||||
*/
|
||||
static void dbPrtMap(struct bmap * bmp)
|
||||
{
|
||||
printk(" mapsize: %d%d\n", bmp->db_mapsize);
|
||||
printk(" nfree: %d%d\n", bmp->db_nfree);
|
||||
printk(" numag: %d\n", bmp->db_numag);
|
||||
printk(" agsize: %d%d\n", bmp->db_agsize);
|
||||
printk(" agl2size: %d\n", bmp->db_agl2size);
|
||||
printk(" agwidth: %d\n", bmp->db_agwidth);
|
||||
printk(" agstart: %d\n", bmp->db_agstart);
|
||||
printk(" agheigth: %d\n", bmp->db_agheigth);
|
||||
printk(" aglevel: %d\n", bmp->db_aglevel);
|
||||
printk(" maxlevel: %d\n", bmp->db_maxlevel);
|
||||
printk(" maxag: %d\n", bmp->db_maxag);
|
||||
printk(" agpref: %d\n", bmp->db_agpref);
|
||||
printk(" l2nbppg: %d\n", bmp->db_l2nbperpage);
|
||||
}
|
||||
|
||||
|
||||
/*
|
||||
* dbPrtCtl()
|
||||
*/
|
||||
static void dbPrtCtl(struct dmapctl * dcp)
|
||||
{
|
||||
int i, j, n;
|
||||
|
||||
printk(" height: %08x\n", le32_to_cpu(dcp->height));
|
||||
printk(" leafidx: %08x\n", le32_to_cpu(dcp->leafidx));
|
||||
printk(" budmin: %08x\n", dcp->budmin);
|
||||
printk(" nleafs: %08x\n", le32_to_cpu(dcp->nleafs));
|
||||
printk(" l2nleafs: %08x\n", le32_to_cpu(dcp->l2nleafs));
|
||||
|
||||
printk("\n Tree:\n");
|
||||
for (i = 0; i < CTLLEAFIND; i += 8) {
|
||||
n = min(8, CTLLEAFIND - i);
|
||||
|
||||
for (j = 0; j < n; j++)
|
||||
printf(" [%03x]: %02x", i + j,
|
||||
(char) dcp->stree[i + j]);
|
||||
printf("\n");
|
||||
}
|
||||
|
||||
printk("\n Tree Leaves:\n");
|
||||
for (i = 0; i < LPERCTL; i += 8) {
|
||||
n = min(8, LPERCTL - i);
|
||||
|
||||
for (j = 0; j < n; j++)
|
||||
printf(" [%03x]: %02x",
|
||||
i + j,
|
||||
(char) dcp->stree[i + j + CTLLEAFIND]);
|
||||
printf("\n");
|
||||
}
|
||||
}
|
||||
#endif /* _JFS_DEBUG_DMAP */
|
||||
|
@ -4554,202 +4554,3 @@ int dtModify(tid_t tid, struct inode *ip,
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
#ifdef _JFS_DEBUG_DTREE
|
||||
/*
|
||||
* dtDisplayTree()
|
||||
*
|
||||
* function: traverse forward
|
||||
*/
|
||||
int dtDisplayTree(struct inode *ip)
|
||||
{
|
||||
int rc;
|
||||
struct metapage *mp;
|
||||
dtpage_t *p;
|
||||
s64 bn, pbn;
|
||||
int index, lastindex, v, h;
|
||||
pxd_t *xd;
|
||||
struct btstack btstack;
|
||||
struct btframe *btsp;
|
||||
struct btframe *parent;
|
||||
u8 *stbl;
|
||||
int psize = 256;
|
||||
|
||||
printk("display B+-tree.\n");
|
||||
|
||||
/* clear stack */
|
||||
btsp = btstack.stack;
|
||||
|
||||
/*
|
||||
* start with root
|
||||
*
|
||||
* root resides in the inode
|
||||
*/
|
||||
bn = 0;
|
||||
v = h = 0;
|
||||
|
||||
/*
|
||||
* first access of each page:
|
||||
*/
|
||||
newPage:
|
||||
DT_GETPAGE(ip, bn, mp, psize, p, rc);
|
||||
if (rc)
|
||||
return rc;
|
||||
|
||||
/* process entries forward from first index */
|
||||
index = 0;
|
||||
lastindex = p->header.nextindex - 1;
|
||||
|
||||
if (p->header.flag & BT_INTERNAL) {
|
||||
/*
|
||||
* first access of each internal page
|
||||
*/
|
||||
printf("internal page ");
|
||||
dtDisplayPage(ip, bn, p);
|
||||
|
||||
goto getChild;
|
||||
} else { /* (p->header.flag & BT_LEAF) */
|
||||
|
||||
/*
|
||||
* first access of each leaf page
|
||||
*/
|
||||
printf("leaf page ");
|
||||
dtDisplayPage(ip, bn, p);
|
||||
|
||||
/*
|
||||
* process leaf page entries
|
||||
*
|
||||
for ( ; index <= lastindex; index++)
|
||||
{
|
||||
}
|
||||
*/
|
||||
|
||||
/* unpin the leaf page */
|
||||
DT_PUTPAGE(mp);
|
||||
}
|
||||
|
||||
/*
|
||||
* go back up to the parent page
|
||||
*/
|
||||
getParent:
|
||||
/* pop/restore parent entry for the current child page */
|
||||
if ((parent = (btsp == btstack.stack ? NULL : --btsp)) == NULL)
|
||||
/* current page must have been root */
|
||||
return;
|
||||
|
||||
/*
|
||||
* parent page scan completed
|
||||
*/
|
||||
if ((index = parent->index) == (lastindex = parent->lastindex)) {
|
||||
/* go back up to the parent page */
|
||||
goto getParent;
|
||||
}
|
||||
|
||||
/*
|
||||
* parent page has entries remaining
|
||||
*/
|
||||
/* get back the parent page */
|
||||
bn = parent->bn;
|
||||
/* v = parent->level; */
|
||||
DT_GETPAGE(ip, bn, mp, PSIZE, p, rc);
|
||||
if (rc)
|
||||
return rc;
|
||||
|
||||
/* get next parent entry */
|
||||
index++;
|
||||
|
||||
/*
|
||||
* internal page: go down to child page of current entry
|
||||
*/
|
||||
getChild:
|
||||
/* push/save current parent entry for the child page */
|
||||
btsp->bn = pbn = bn;
|
||||
btsp->index = index;
|
||||
btsp->lastindex = lastindex;
|
||||
/* btsp->level = v; */
|
||||
/* btsp->node = h; */
|
||||
++btsp;
|
||||
|
||||
/* get current entry for the child page */
|
||||
stbl = DT_GETSTBL(p);
|
||||
xd = (pxd_t *) & p->slot[stbl[index]];
|
||||
|
||||
/*
|
||||
* first access of each internal entry:
|
||||
*/
|
||||
|
||||
/* get child page */
|
||||
bn = addressPXD(xd);
|
||||
psize = lengthPXD(xd) << ip->i_ipmnt->i_l2bsize;
|
||||
|
||||
printk("traverse down 0x%Lx[%d]->0x%Lx\n", pbn, index, bn);
|
||||
v++;
|
||||
h = index;
|
||||
|
||||
/* release parent page */
|
||||
DT_PUTPAGE(mp);
|
||||
|
||||
/* process the child page */
|
||||
goto newPage;
|
||||
}
|
||||
|
||||
|
||||
/*
|
||||
* dtDisplayPage()
|
||||
*
|
||||
* function: display page
|
||||
*/
|
||||
int dtDisplayPage(struct inode *ip, s64 bn, dtpage_t * p)
|
||||
{
|
||||
int rc;
|
||||
struct metapage *mp;
|
||||
struct ldtentry *lh;
|
||||
struct idtentry *ih;
|
||||
pxd_t *xd;
|
||||
int i, j;
|
||||
u8 *stbl;
|
||||
wchar_t name[JFS_NAME_MAX + 1];
|
||||
struct component_name key = { 0, name };
|
||||
int freepage = 0;
|
||||
|
||||
if (p == NULL) {
|
||||
freepage = 1;
|
||||
DT_GETPAGE(ip, bn, mp, PSIZE, p, rc);
|
||||
if (rc)
|
||||
return rc;
|
||||
}
|
||||
|
||||
/* display page control */
|
||||
printk("bn:0x%Lx flag:0x%08x nextindex:%d\n",
|
||||
bn, p->header.flag, p->header.nextindex);
|
||||
|
||||
/* display entries */
|
||||
stbl = DT_GETSTBL(p);
|
||||
for (i = 0, j = 1; i < p->header.nextindex; i++, j++) {
|
||||
dtGetKey(p, i, &key, JFS_SBI(ip->i_sb)->mntflag);
|
||||
key.name[key.namlen] = '\0';
|
||||
if (p->header.flag & BT_LEAF) {
|
||||
lh = (struct ldtentry *) & p->slot[stbl[i]];
|
||||
printf("\t[%d] %s:%d", i, key.name,
|
||||
le32_to_cpu(lh->inumber));
|
||||
} else {
|
||||
ih = (struct idtentry *) & p->slot[stbl[i]];
|
||||
xd = (pxd_t *) ih;
|
||||
bn = addressPXD(xd);
|
||||
printf("\t[%d] %s:0x%Lx", i, key.name, bn);
|
||||
}
|
||||
|
||||
if (j == 4) {
|
||||
printf("\n");
|
||||
j = 0;
|
||||
}
|
||||
}
|
||||
|
||||
printf("\n");
|
||||
|
||||
if (freepage)
|
||||
DT_PUTPAGE(mp);
|
||||
|
||||
return 0;
|
||||
}
|
||||
#endif /* _JFS_DEBUG_DTREE */
|
||||
|
@ -269,11 +269,4 @@ extern int dtModify(tid_t tid, struct inode *ip, struct component_name * key,
ino_t * orig_ino, ino_t new_ino, int flag);

extern int jfs_readdir(struct file *filp, void *dirent, filldir_t filldir);

#ifdef _JFS_DEBUG_DTREE
extern int dtDisplayTree(struct inode *ip);

extern int dtDisplayPage(struct inode *ip, s64 bn, dtpage_t * p);
#endif /* _JFS_DEBUG_DTREE */

#endif /* !_H_JFS_DTREE */

@ -86,25 +86,6 @@ static int diIAGRead(struct inomap * imap, int, struct metapage **);
static int copy_from_dinode(struct dinode *, struct inode *);
static void copy_to_dinode(struct dinode *, struct inode *);

/*
* debug code for double-checking inode map
*/
/* #define _JFS_DEBUG_IMAP 1 */

#ifdef _JFS_DEBUG_IMAP
#define DBG_DIINIT(imap) DBGdiInit(imap)
#define DBG_DIALLOC(imap, ino) DBGdiAlloc(imap, ino)
#define DBG_DIFREE(imap, ino) DBGdiFree(imap, ino)

static void *DBGdiInit(struct inomap * imap);
static void DBGdiAlloc(struct inomap * imap, ino_t ino);
static void DBGdiFree(struct inomap * imap, ino_t ino);
#else
#define DBG_DIINIT(imap)
#define DBG_DIALLOC(imap, ino)
#define DBG_DIFREE(imap, ino)
#endif /* _JFS_DEBUG_IMAP */

/*
* NAME: diMount()
*
@ -188,8 +169,6 @@ int diMount(struct inode *ipimap)
imap->im_ipimap = ipimap;
JFS_IP(ipimap)->i_imap = imap;

// DBG_DIINIT(imap);

return (0);
}

@ -1043,7 +1022,6 @@ int diFree(struct inode *ip)
/* update the bitmap.
*/
iagp->wmap[extno] = cpu_to_le32(bitmap);
DBG_DIFREE(imap, inum);

/* update the free inode counts at the iag, ag and
* map level.
@ -1231,7 +1209,6 @@ int diFree(struct inode *ip)
jfs_error(ip->i_sb, "diFree: the pmap does not show inode free");
}
iagp->wmap[extno] = 0;
DBG_DIFREE(imap, inum);
PXDlength(&iagp->inoext[extno], 0);
PXDaddress(&iagp->inoext[extno], 0);

@ -1350,7 +1327,6 @@ diInitInode(struct inode *ip, int iagno, int ino, int extno, struct iag * iagp)
struct jfs_inode_info *jfs_ip = JFS_IP(ip);

ip->i_ino = (iagno << L2INOSPERIAG) + ino;
DBG_DIALLOC(JFS_IP(ipimap)->i_imap, ip->i_ino);
jfs_ip->ixpxd = iagp->inoext[extno];
jfs_ip->agno = BLKTOAG(le64_to_cpu(iagp->agstart), sbi);
jfs_ip->active_ag = -1;
@ -3185,84 +3161,3 @@ static void copy_to_dinode(struct dinode * dip, struct inode *ip)
if (S_ISCHR(ip->i_mode) || S_ISBLK(ip->i_mode))
dip->di_rdev = cpu_to_le32(jfs_ip->dev);
}

#ifdef _JFS_DEBUG_IMAP
/*
* DBGdiInit()
*/
static void *DBGdiInit(struct inomap * imap)
{
u32 *dimap;
int size;
size = 64 * 1024;
if ((dimap = (u32 *) xmalloc(size, L2PSIZE, kernel_heap)) == NULL)
assert(0);
bzero((void *) dimap, size);
imap->im_DBGdimap = dimap;
}

/*
* DBGdiAlloc()
*/
static void DBGdiAlloc(struct inomap * imap, ino_t ino)
{
u32 *dimap = imap->im_DBGdimap;
int w, b;
u32 m;
w = ino >> 5;
b = ino & 31;
m = 0x80000000 >> b;
assert(w < 64 * 256);
if (dimap[w] & m) {
printk("DEBUG diAlloc: duplicate alloc ino:0x%x\n", ino);
}
dimap[w] |= m;
}

/*
* DBGdiFree()
*/
static void DBGdiFree(struct inomap * imap, ino_t ino)
{
u32 *dimap = imap->im_DBGdimap;
int w, b;
u32 m;
w = ino >> 5;
b = ino & 31;
m = 0x80000000 >> b;
assert(w < 64 * 256);
if ((dimap[w] & m) == 0) {
printk("DEBUG diFree: duplicate free ino:0x%x\n", ino);
}
dimap[w] &= ~m;
}

static void dump_cp(struct inomap * ipimap, char *function, int line)
{
printk("\n* ********* *\nControl Page %s %d\n", function, line);
printk("FreeIAG %d\tNextIAG %d\n", ipimap->im_freeiag,
ipimap->im_nextiag);
printk("NumInos %d\tNumFree %d\n",
atomic_read(&ipimap->im_numinos),
atomic_read(&ipimap->im_numfree));
printk("AG InoFree %d\tAG ExtFree %d\n",
ipimap->im_agctl[0].inofree, ipimap->im_agctl[0].extfree);
printk("AG NumInos %d\tAG NumFree %d\n",
ipimap->im_agctl[0].numinos, ipimap->im_agctl[0].numfree);
}

static void dump_iag(struct iag * iag, char *function, int line)
{
printk("\n* ********* *\nIAG %s %d\n", function, line);
printk("IagNum %d\tIAG Free %d\n", le32_to_cpu(iag->iagnum),
le32_to_cpu(iag->iagfree));
printk("InoFreeFwd %d\tInoFreeBack %d\n",
le32_to_cpu(iag->inofreefwd),
le32_to_cpu(iag->inofreeback));
printk("ExtFreeFwd %d\tExtFreeBack %d\n",
le32_to_cpu(iag->extfreefwd),
le32_to_cpu(iag->extfreeback));
printk("NFreeInos %d\tNFreeExts %d\n", le32_to_cpu(iag->nfreeinos),
le32_to_cpu(iag->nfreeexts));
}
#endif /* _JFS_DEBUG_IMAP */

@ -51,8 +51,9 @@ int jfs_strfromUCS_le(char *to, const __le16 * from,
}
} else {
for (i = 0; (i < len) && from[i]; i++) {
if (le16_to_cpu(from[i]) & 0xff00) {
if (warn) {
if (unlikely(le16_to_cpu(from[i]) & 0xff00)) {
to[i] = '?';
if (unlikely(warn)) {
warn--;
warn_again--;
printk(KERN_ERR
@ -61,7 +62,7 @@ int jfs_strfromUCS_le(char *to, const __le16 * from,
printk(KERN_ERR
"mount with iocharset=utf8 to access\n");
}
to[i] = '?';

}
else
to[i] = (char) (le16_to_cpu(from[i]));

@ -135,14 +135,6 @@ static int xtSearchNode(struct inode *ip,
static int xtRelink(tid_t tid, struct inode *ip, xtpage_t * fp);
#endif /* _STILL_TO_PORT */

/* External references */

/*
* debug control
*/
/* #define _JFS_DEBUG_XTREE 1 */


/*
* xtLookup()
*
@ -4140,338 +4132,6 @@ s64 xtTruncate_pmap(tid_t tid, struct inode *ip, s64 committed_size)
return 0;
}


#ifdef _JFS_DEBUG_XTREE
/*
* xtDisplayTree()
*
* function: traverse forward
*/
int xtDisplayTree(struct inode *ip)
{
int rc = 0;
struct metapage *mp;
xtpage_t *p;
s64 bn, pbn;
int index, lastindex, v, h;
xad_t *xad;
struct btstack btstack;
struct btframe *btsp;
struct btframe *parent;

printk("display B+-tree.\n");

/* clear stack */
btsp = btstack.stack;

/*
* start with root
*
* root resides in the inode
*/
bn = 0;
v = h = 0;

/*
* first access of each page:
*/
getPage:
XT_GETPAGE(ip, bn, mp, PSIZE, p, rc);
if (rc)
return rc;

/* process entries forward from first index */
index = XTENTRYSTART;
lastindex = le16_to_cpu(p->header.nextindex) - 1;

if (p->header.flag & BT_INTERNAL) {
/*
* first access of each internal page
*/
goto getChild;
} else { /* (p->header.flag & BT_LEAF) */

/*
* first access of each leaf page
*/
printf("leaf page ");
xtDisplayPage(ip, bn, p);

/* unpin the leaf page */
XT_PUTPAGE(mp);
}

/*
* go back up to the parent page
*/
getParent:
/* pop/restore parent entry for the current child page */
if ((parent = (btsp == btstack.stack ? NULL : --btsp)) == NULL)
/* current page must have been root */
return;

/*
* parent page scan completed
*/
if ((index = parent->index) == (lastindex = parent->lastindex)) {
/* go back up to the parent page */
goto getParent;
}

/*
* parent page has entries remaining
*/
/* get back the parent page */
bn = parent->bn;
/* v = parent->level; */
XT_GETPAGE(ip, bn, mp, PSIZE, p, rc);
if (rc)
return rc;

/* get next parent entry */
index++;

/*
* internal page: go down to child page of current entry
*/
getChild:
/* push/save current parent entry for the child page */
btsp->bn = pbn = bn;
btsp->index = index;
btsp->lastindex = lastindex;
/* btsp->level = v; */
/* btsp->node = h; */
++btsp;

/* get child page */
xad = &p->xad[index];
bn = addressXAD(xad);

/*
* first access of each internal entry:
*/
/* release parent page */
XT_PUTPAGE(mp);

printk("traverse down 0x%lx[%d]->0x%lx\n", (ulong) pbn, index,
(ulong) bn);
v++;
h = index;

/* process the child page */
goto getPage;
}


/*
* xtDisplayPage()
*
* function: display page
*/
int xtDisplayPage(struct inode *ip, s64 bn, xtpage_t * p)
{
int rc = 0;
xad_t *xad;
s64 xaddr, xoff;
int xlen, i, j;

/* display page control */
printf("bn:0x%lx flag:0x%x nextindex:%d\n",
(ulong) bn, p->header.flag,
le16_to_cpu(p->header.nextindex));

/* display entries */
xad = &p->xad[XTENTRYSTART];
for (i = XTENTRYSTART, j = 1; i < le16_to_cpu(p->header.nextindex);
i++, xad++, j++) {
xoff = offsetXAD(xad);
xaddr = addressXAD(xad);
xlen = lengthXAD(xad);
printf("\t[%d] 0x%lx:0x%lx(0x%x)", i, (ulong) xoff,
(ulong) xaddr, xlen);

if (j == 4) {
printf("\n");
j = 0;
}
}

printf("\n");
}
#endif /* _JFS_DEBUG_XTREE */


#ifdef _JFS_WIP
/*
* xtGather()
*
* function:
* traverse for allocation acquiring tlock at commit time
* (vs at the time of update) logging backward top down
*
* note:
* problem - establishing that all new allocation have been
* processed both for append and random write in sparse file
* at the current entry at the current subtree root page
*
*/
int xtGather(btree_t *t)
{
int rc = 0;
xtpage_t *p;
u64 bn;
int index;
btentry_t *e;
struct btstack btstack;
struct btsf *parent;

/* clear stack */
BT_CLR(&btstack);

/*
* start with root
*
* root resides in the inode
*/
bn = 0;
XT_GETPAGE(ip, bn, mp, PSIZE, p, rc);
if (rc)
return rc;

/* new root is NOT pointed by a new entry
if (p->header.flag & NEW)
allocate new page lock;
write a NEWPAGE log;
*/

dopage:
/*
* first access of each page:
*/
/* process entries backward from last index */
index = le16_to_cpu(p->header.nextindex) - 1;

if (p->header.flag & BT_LEAF) {
/*
* first access of each leaf page
*/
/* process leaf page entries backward */
for (; index >= XTENTRYSTART; index--) {
e = &p->xad[index];
/*
* if newpage, log NEWPAGE.
*
if (e->flag & XAD_NEW) {
nfound =+ entry->length;
update current page lock for the entry;
newpage(entry);
*
* if moved, log move.
*
} else if (e->flag & XAD_MOVED) {
reset flag;
update current page lock for the entry;
}
*/
}

/* unpin the leaf page */
XT_PUTPAGE(mp);

/*
* go back up to the parent page
*/
getParent:
/* restore parent entry for the current child page */
if ((parent = BT_POP(&btstack)) == NULL)
/* current page must have been root */
return 0;

if ((index = parent->index) == XTENTRYSTART) {
/*
* parent page scan completed
*/
/* go back up to the parent page */
goto getParent;
} else {
/*
* parent page has entries remaining
*/
/* get back the parent page */
bn = parent->bn;
XT_GETPAGE(ip, bn, mp, PSIZE, p, rc);
if (rc)
return -EIO;

/* first subroot page which
* covers all new allocated blocks
* itself not new/modified.
* (if modified from split of descendent,
* go down path of split page)

if (nfound == nnew &&
!(p->header.flag & (NEW | MOD)))
exit scan;
*/

/* process parent page entries backward */
index--;
}
} else {
/*
* first access of each internal page
*/
}

/*
* internal page: go down to child page of current entry
*/

/* save current parent entry for the child page */
BT_PUSH(&btstack, bn, index);

/* get current entry for the child page */
e = &p->xad[index];

/*
* first access of each internal entry:
*/
/*
* if new entry, log btree_tnewentry.
*
if (e->flag & XAD_NEW)
update parent page lock for the entry;
*/

/* release parent page */
XT_PUTPAGE(mp);

/* get child page */
bn = e->bn;
XT_GETPAGE(ip, bn, mp, PSIZE, p, rc);
if (rc)
return rc;

/*
* first access of each non-root page:
*/
/*
* if new, log btree_newpage.
*
if (p->header.flag & NEW)
allocate new page lock;
write a NEWPAGE log (next, prev);
*/

/* process the child page */
goto dopage;

out:
return 0;
}
#endif /* _JFS_WIP */


#ifdef CONFIG_JFS_STATISTICS
int jfs_xtstat_read(char *buffer, char **start, off_t offset, int length,
int *eof, void *data)

@ -131,10 +131,4 @@ extern int xtRelocate(tid_t tid, struct inode *ip,
extern int xtAppend(tid_t tid,
struct inode *ip, int xflag, s64 xoff, int maxblocks,
int *xlenp, s64 * xaddrp, int flag);

#ifdef _JFS_DEBUG_XTREE
extern int xtDisplayTree(struct inode *ip);
extern int xtDisplayPage(struct inode *ip, s64 bn, xtpage_t * p);
#endif /* _JFS_DEBUG_XTREE */

#endif /* !_H_JFS_XTREE */

@ -781,7 +781,7 @@ static int can_set_xattr(struct inode *inode, const char *name,
if (IS_RDONLY(inode))
return -EROFS;

if (IS_IMMUTABLE(inode) || IS_APPEND(inode) || S_ISLNK(inode->i_mode))
if (IS_IMMUTABLE(inode) || IS_APPEND(inode))
return -EPERM;

if(strncmp(name, XATTR_SYSTEM_PREFIX, XATTR_SYSTEM_PREFIX_LEN) == 0)
@ -790,12 +790,12 @@ static int can_set_xattr(struct inode *inode, const char *name,
*/
return can_set_system_xattr(inode, name, value, value_len);

if(strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN) != 0)
if(strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN) == 0)
return (capable(CAP_SYS_ADMIN) ? 0 : -EPERM);

#ifdef CONFIG_JFS_SECURITY
if (strncmp(name, XATTR_SECURITY_PREFIX, XATTR_SECURITY_PREFIX_LEN)
!= 0)
== 0)
return 0; /* Leave it to the security module */
#endif

@ -331,7 +331,7 @@ static ctl_table nlm_sysctls[] = {
.ctl_name = CTL_UNNUMBERED,
.procname = "nlm_grace_period",
.data = &nlm_grace_period,
.maxlen = sizeof(int),
.maxlen = sizeof(unsigned long),
.mode = 0644,
.proc_handler = &proc_doulongvec_minmax,
.extra1 = (unsigned long *) &nlm_grace_period_min,
@ -341,7 +341,7 @@ static ctl_table nlm_sysctls[] = {
.ctl_name = CTL_UNNUMBERED,
.procname = "nlm_timeout",
.data = &nlm_timeout,
.maxlen = sizeof(int),
.maxlen = sizeof(unsigned long),
.mode = 0644,
.proc_handler = &proc_doulongvec_minmax,
.extra1 = (unsigned long *) &nlm_timeout_min,

@ -1,21 +1,18 @@
ToDo/Notes:
- Find and fix bugs.
- Checkpoint or disable the user space journal ($UsnJrnl).
- In between ntfs_prepare/commit_write, need exclusion between
simultaneous file extensions. Need perhaps an NInoResizeUnderway()
flag which we can set in ntfs_prepare_write() and clear again in
ntfs_commit_write(). Just have to be careful in readpage/writepage,
as well as in truncate, that we play nice... We might need to have
a data_size field in the ntfs_inode to store the real attribute
length. Also need to be careful with initialized_size extention in
simultaneous file extensions. This is given to us by holding i_sem
on the inode. The only places in the kernel when a file is resized
are prepare/commit write and truncate for both of which i_sem is
held. Just have to be careful in readpage/writepage and all other
helpers not running under i_sem that we play nice...
Also need to be careful with initialized_size extention in
ntfs_prepare_write. Basically, just be _very_ careful in this code...
OTOH, perhaps i_sem, which is held accross generic_file_write is
sufficient for synchronisation here. We then just need to make sure
ntfs_readpage/writepage/truncate interoperate properly with us.
UPDATE: The above is all ok as it is due to i_sem held. The only
thing that needs to be checked is ntfs_writepage() which does not
hold i_sem. It cannot change i_size but it needs to cope with a
concurrent i_size change.
UPDATE: The only things that need to be checked are read/writepage
which do not hold i_sem. Note writepage cannot change i_size but it
needs to cope with a concurrent i_size change, just like readpage.
Also both need to cope with concurrent changes to the other sizes,
i.e. initialized/allocated/compressed size, as well.
- Implement mft.c::sync_mft_mirror_umount(). We currently will just
leave the volume dirty on umount if the final iput(vol->mft_ino)
causes a write of any mirrored mft records due to the mft mirror
@ -25,12 +22,158 @@ ToDo/Notes:
- Enable the code for setting the NT4 compatibility flag when we start
making NTFS 1.2 specific modifications.

2.1.23-WIP
2.1.23 - Implement extension of resident files and make writing safe as well as
many bug fixes, cleanups, and enhancements...

- Add printk rate limiting for ntfs_warning() and ntfs_error() when
compiled without debug. This avoids a possible denial of service
attack. Thanks to Carl-Daniel Hailfinger from SuSE for pointing this
out.
- Fix compilation warnings on ia64. (Randy Dunlap)
- Use i_size_{read,write}() instead of reading i_size by hand and cache
the value where apropriate.
- Add size_lock to the ntfs_inode structure. This is an rw spinlock
and it locks against access to the inode sizes. Note, ->size_lock
is also accessed from irq context so you must use the _irqsave and
_irqrestore lock and unlock functions, respectively. Protect all
accesses to allocated_size, initialized_size, and compressed_size.
(A locking sketch for this pattern follows this entry.)
- Minor optimization to fs/ntfs/super.c::ntfs_statfs() and its helpers.
- Implement extension of resident files in the regular file write code
paths (fs/ntfs/aops.c::ntfs_{prepare,commit}_write()). At present
this only works until the data attribute becomes too big for the mft
record after which we abort the write returning -EOPNOTSUPP from
ntfs_prepare_write().
- Add disable_sparse mount option together with a per volume sparse
enable bit which is set appropriately and a per inode sparse disable
bit which is preset on some system file inodes as appropriate.
- Enforce that sparse support is disabled on NTFS volumes pre 3.0.
- Fix a bug in fs/ntfs/runlist.c::ntfs_mapping_pairs_decompress() in
the creation of the unmapped runlist element for the base attribute
extent.
- Split ntfs_map_runlist() into ntfs_map_runlist() and a non-locking
helper ntfs_map_runlist_nolock() which is used by ntfs_map_runlist().
This allows us to map runlist fragments with the runlist lock already
held without having to drop and reacquire it around the call. Adapt
all callers.
- Change ntfs_find_vcn() to ntfs_find_vcn_nolock() which takes a locked
runlist. This allows us to find runlist elements with the runlist
lock already held without having to drop and reacquire it around the
call. Adapt all callers.
- Change time to u64 in time.h::ntfs2utc() as it otherwise generates a
warning in the do_div() call on sparc32. Thanks to Meelis Roos for
the report and analysis of the warning.
- Fix a nasty runlist merge bug when merging two holes.
- Set the ntfs_inode->allocated_size to the real allocated size in the
mft record for resident attributes (fs/ntfs/inode.c).
- Small readability cleanup to use "a" instead of "ctx->attr"
everywhere (fs/ntfs/inode.c).
- Make fs/ntfs/namei.c::ntfs_get_{parent,dentry} static and move the
definition of ntfs_export_ops from fs/ntfs/super.c to namei.c. Also,
declare ntfs_export_ops in fs/ntfs/ntfs.h.
- Correct sparse file handling. The compressed values need to be
checked and set in the ntfs inode as done for compressed files and
the compressed size needs to be used for vfs inode->i_blocks instead
of the allocated size, again, as done for compressed files.
- Add AT_EA in addition to AT_DATA to whitelist for being allowed to be
non-resident in fs/ntfs/attrib.c::ntfs_attr_can_be_non_resident().
- Add fs/ntfs/attrib.c::ntfs_attr_vcn_to_lcn_nolock() used by the new
write code.
- Fix bug in fs/ntfs/attrib.c::ntfs_find_vcn_nolock() where after
dropping the read lock and taking the write lock we were not checking
whether someone else did not already do the work we wanted to do.
- Rename fs/ntfs/attrib.c::ntfs_find_vcn_nolock() to
ntfs_attr_find_vcn_nolock() and update all callers.
- Add fs/ntfs/attrib.[hc]::ntfs_attr_make_non_resident().
- Fix sign of various error return values to be negative in
fs/ntfs/lcnalloc.c.
- Modify ->readpage and ->writepage (fs/ntfs/aops.c) so they detect and
handle the case where an attribute is converted from resident to
non-resident by a concurrent file write.
- Remove checks for NULL before calling kfree() since kfree() does the
checking itself. (Jesper Juhl)
- Some utilities modify the boot sector but do not update the checksum.
Thus, relax the checking in fs/ntfs/super.c::is_boot_sector_ntfs() to
only emit a warning when the checksum is incorrect rather than
refusing the mount. Thanks to Bernd Casimir for pointing this
problem out.
- Update attribute definition handling.
- Add NTFS_MAX_CLUSTER_SIZE and NTFS_MAX_PAGES_PER_CLUSTER constants.
- Use NTFS_MAX_CLUSTER_SIZE in super.c instead of hard coding 0x10000.
- Use MAX_BUF_PER_PAGE instead of variable sized array allocation for
better code generation and one less sparse warning in fs/ntfs/aops.c.
- Remove spurious void pointer casts from fs/ntfs/. (Pekka Enberg)
- Use C99 style structure initialization after memory allocation where
possible (fs/ntfs/{attrib.c,index.c,super.c}). Thanks to Al Viro and
Pekka Enberg.
- Stamp the transaction log ($UsnJrnl), aka user space journal, if it
is active on the volume and we are mounting read-write or remounting
from read-only to read-write.
- Fix a bug in address space operations error recovery code paths where
if the runlist was not mapped at all and a mapping error occured we
would leave the runlist locked on exit to the function so that the
next access to the same file would try to take the lock and deadlock.
- Detect the case when Windows has been suspended to disk on the volume
to be mounted and if this is the case do not allow (re)mounting
read-write. This is done by parsing hiberfil.sys if present.
- Fix several occurences of a bug where we would perform 'var & ~const'
with a 64-bit variable and a int, i.e. 32-bit, constant. This causes
the higher order 32-bits of the 64-bit variable to be zeroed. To fix
this cast the 'const' to the same 64-bit type as 'var'.
(A worked example follows this entry.)
- Change the runlist terminator of the newly allocated cluster(s) to
LCN_ENOENT in ntfs_attr_make_non_resident(). Otherwise the runlist
code gets confused.
- Add an extra parameter @last_vcn to ntfs_get_size_for_mapping_pairs()
and ntfs_mapping_pairs_build() to allow the runlist encoding to be
partial which is desirable when filling holes in sparse attributes.
Update all callers.
- Change ntfs_map_runlist_nolock() to only decompress the mapping pairs
if the requested vcn is inside it. Otherwise we get into problems
when we try to map an out of bounds vcn because we then try to map
the already mapped runlist fragment which causes
ntfs_mapping_pairs_decompress() to fail and return error. Update
ntfs_attr_find_vcn_nolock() accordingly.
- Fix a nasty deadlock that appeared in recent kernels.
The situation: VFS inode X on a mounted ntfs volume is dirty. For
same inode X, the ntfs_inode is dirty and thus corresponding on-disk
inode, i.e. mft record, which is in a dirty PAGE_CACHE_PAGE belonging
to the table of inodes, i.e. $MFT, inode 0.
What happens:
Process 1: sys_sync()/umount()/whatever... calls
__sync_single_inode() for $MFT -> do_writepages() -> write_page for
the dirty page containing the on-disk inode X, the page is now locked
-> ntfs_write_mst_block() which clears PageUptodate() on the page to
prevent anyone else getting hold of it whilst it does the write out.
This is necessary as the on-disk inode needs "fixups" applied before
the write to disk which are removed again after the write and
PageUptodate is then set again. It then analyses the page looking
for dirty on-disk inodes and when it finds one it calls
ntfs_may_write_mft_record() to see if it is safe to write this
on-disk inode. This then calls ilookup5() to check if the
corresponding VFS inode is in icache(). This in turn calls ifind()
which waits on the inode lock via wait_on_inode whilst holding the
global inode_lock.
Process 2: pdflush results in a call to __sync_single_inode for the
same VFS inode X on the ntfs volume. This locks the inode (I_LOCK)
then calls write-inode -> ntfs_write_inode -> map_mft_record() ->
read_cache_page() for the page (in page cache of table of inodes
$MFT, inode 0) containing the on-disk inode. This page has
PageUptodate() clear because of Process 1 (see above) so
read_cache_page() blocks when it tries to take the page lock for the
page so it can call ntfs_read_page().
Thus Process 1 is holding the page lock on the page containing the
on-disk inode X and it is waiting on the inode X to be unlocked in
ifind() so it can write the page out and then unlock the page.
And Process 2 is holding the inode lock on inode X and is waiting for
the page to be unlocked so it can call ntfs_readpage() or discover
that Process 1 set PageUptodate() again and use the page.
Thus we have a deadlock due to ifind() waiting on the inode lock.
The solution: The fix is to use the newly introduced
ilookup5_nowait() which does not wait on the inode's lock and hence
avoids the deadlock. This is safe as we do not care about the VFS
inode and only use the fact that it is in the VFS inode cache and the
fact that the vfs and ntfs inodes are one struct in memory to find
the ntfs inode in memory if present. Also, the ntfs inode has its
own locking so it does not matter if the vfs inode is locked.
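The 'var & ~const' entry above can be reproduced in plain userspace C. A
minimal illustration, assuming an unsigned 32-bit constant as the mask (the
case in which the described loss of the upper 32 bits shows up directly); the
variable names are made up for the example:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            uint64_t var = 0x123456789abcdef0ULL;
            unsigned int mask = 0xfff;      /* 32-bit constant */

            /* Buggy: ~mask is computed in 32 bits (0xfffff000) and is then
             * zero-extended to 64 bits, wiping the upper half of var. */
            uint64_t buggy = var & ~mask;

            /* Fixed: cast the constant to the variable's 64-bit type first. */
            uint64_t fixed = var & ~(uint64_t)mask;

            printf("buggy: 0x%016llx\n", (unsigned long long)buggy);
            printf("fixed: 0x%016llx\n", (unsigned long long)fixed);
            return 0;
    }

Running it prints 0x000000009abcd000 for the buggy form and 0x123456789abcd000
for the fixed one. With a signed int constant the complement sign-extends
instead, so whether the truncation bites depends on the constant's exact type;
the explicit cast makes the intent unambiguous either way.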
2.1.22 - Many bug and race fixes and error handling improvements.
|
||||
|
||||
@ -1037,7 +1180,7 @@ tng-0.0.8 - 08/03/2002 - Now using BitKeeper, http://linux-ntfs.bkbits.net/
|
||||
- Further runlist merging work. (Richard Russon)
|
||||
- Backwards compatibility for gcc-2.95. (Richard Russon)
|
||||
- Update to kernel 2.5.5-pre1 and rediff the now tiny patch.
|
||||
- Convert to new file system declaration using ->ntfs_get_sb() and
|
||||
- Convert to new filesystem declaration using ->ntfs_get_sb() and
|
||||
replacing ntfs_read_super() with ntfs_fill_super().
|
||||
- Set s_maxbytes to MAX_LFS_FILESIZE to avoid page cache page index
|
||||
overflow on 32-bit architectures.
|
||||
@ -1333,7 +1476,7 @@ tng-0.0.1 - The first useful version.
|
||||
The driver is now actually useful! Yey. (-: It undoubtedly has got bugs
|
||||
though and it doesn't implement accesssing compressed files yet. Also,
|
||||
accessing files with attribute list attributes is not implemented yet
|
||||
either. But for small or simple file systems it should work and allow
|
||||
either. But for small or simple filesystems it should work and allow
|
||||
you to list directories, use stat on directory entries and the file
|
||||
system, open, read, mmap and llseek around in files. A big mile stone
|
||||
has been reached!
|
||||
@ -1341,7 +1484,7 @@ tng-0.0.1 - The first useful version.
|
||||
tng-0.0.0 - Initial version tag.
|
||||
|
||||
Initial driver implementation. The driver can mount and umount simple
|
||||
NTFS file systems (i.e. ones without attribute lists in the system
|
||||
NTFS filesystems (i.e. ones without attribute lists in the system
|
||||
files). If the mount fails there might be problems in the error handling
|
||||
code paths, so be warned. Otherwise it seems to be loading the system
|
||||
files nicely and the mft record read mapping/unmapping seems to be
|
||||
|
@ -6,7 +6,7 @@ ntfs-objs := aops.o attrib.o collate.o compress.o debug.o dir.o file.o \
index.o inode.o mft.o mst.o namei.o runlist.o super.o sysctl.o \
unistr.o upcase.o

EXTRA_CFLAGS = -DNTFS_VERSION=\"2.1.22\"
EXTRA_CFLAGS = -DNTFS_VERSION=\"2.1.23\"

ifeq ($(CONFIG_NTFS_DEBUG),y)
EXTRA_CFLAGS += -DDEBUG
@ -15,5 +15,5 @@ endif
ifeq ($(CONFIG_NTFS_RW),y)
EXTRA_CFLAGS += -DNTFS_RW

ntfs-objs += bitmap.o lcnalloc.o logfile.o quota.o
ntfs-objs += bitmap.o lcnalloc.o logfile.o quota.o usnjrnl.o
endif